id (string, length 4 to 10) | text (string, length 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
1739300256
|
Set ROS_DOMAIN_ID
To avoid collisions within the same network
#7
|
gharchive/issue
| 2023-06-03T08:57:43 |
2025-04-01T06:40:48.361768
|
{
"authors": [
"uhobeike"
],
"repo": "uhobeike/ros2_humble_install_script",
"url": "https://github.com/uhobeike/ros2_humble_install_script/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2174466091
|
Not getting unlocked status event
Hi.
Well done for creating this, I realise some features are a bit rough, but I have been able to configure it for our building which has 3 floors with a controller on each floor.
With all of the doors, I am able to unlock them using uhppoted.door.{door}.unlock and the uhppoted.door.{door}.lock status shows unlocked until the next poll. However the uhppoted.door.{door}.unlocked.event appears to never fire for any door and the status is always shown as Unknown. Any ideas?
All of my controllers are V6.62.51215 which I realise is pretty old...
One other thing that would be good to know: is there any way to import the existing swipe records from a controller?
Good work so far though.
Phil
To update this, after a bit more research, it appears that 'some' doors are working and generating the unlocked events. Of our 9 doors, 3 seem to work. There seems to be no pattern though (different numbers of doors on different controllers?)
Oh dear .. I guess this means I'm going to have to remove the bit in the README which says "... probably not suitable for office buildings" :-).
Hmm, v6.62.51215 firmware should be ok - it's the earliest version that is known to work, although it does have some oddities that were fixed in later versions. But I've just checked and the code is in place to handle those.
Oh .. oops! Best guess is that the controllers are probably not configured to send events to the Home Assistant machine and uhppoted-app-home-assistant doesn't configure them automatically either. I need to think about how best to do that - but if you want to configure them manually in the interim, the easiest is probably the CLI set-listener command:
set-listener <controller> <address:port>
e.g. uhppote-cli set-listener 405419896 192.168.1.100:60001
The reason you're seeing occasional events is that there is a background process that retrieves "missing" events from the controller, but the interaction with the Home Assistant poll cycle and event handling makes it more than a bit unintuitive.
re. existing swipe records. It's possible except that I have no idea where to store them in Home Assistant (am open to suggestions though). I did experiment a bit but hass timestamps events when it receives them so the built-in event logs make no sense whatever doing it that way.
Perfect, set-listener has sorted it! I assume the controllers only send an unlock event, as the doors show as unlocked until the next poll (30 seconds) - is this correct?
I was a bit concerned about using 'early Alpha' software but since the Windows software stopped working, I have been managing them manually using the internal web interface on each controller, which is a bit painful, so I thought even if it is only 1/2 working, it must be better than that!
I assume the set-listener setting is persistent on the controllers (or do I need to set it again after a controller reboot?)
I will have a think about the existing swipe records, but I am a bit new to HA so still learning.
Thanks again
Phil
Oh great!! Very glad it was that and not something completely weird, and yes, you're completely correct - the controller only generates an unlocked event, the locked event is a synthetic event that is generated from the poll data.
The controller event listener setting is persistent across power cycles, etc, so it only needs to be set once - although I may add a controller-unconfigured event/entity/somesuch so that at least there is some kind of notification if it isn't set. Still thinking about how best to do it though...
re. early Alpha. <smile> I shall try not to break anything.
re. I am a bit new to HA so still learning. Likewise - it's been quite a learning curve :-)
I'm going to leave this issue open until I have something in place to set/monitor the controller event listener setting - will post an update here when it's available.
Hi,
The wording of the log message is a bit misleading - it is updating the Home Assistant controller date/time entity with the controller date/time every 30 seconds (or whatever poll interval you choose to configure). The actual controller date/time is only set when you change it via the uhppoted.controller.{controller}.datetime entity. I'll fix the log message to read something like "fetching controller date/time" to avoid confusion.
re. who/what/when. The controller doesn't natively support access queries because it's only a small microcontroller with limited capability. It's also reasonably unlikely that the Home Assistant custom component ever will - hass just doesn't include the UI elements to implement something like that (at least not at the moment anyway).
A tool to query and analyse the event logs is on the TODO list but it's not going to happen anytime soon barring a 3AM stroke of genius - it is surprisingly complicated and difficult to do well! In the interim however, there are a couple of other options:
use the CLI get-event command to retrieve events from the controllers and store them in a database that you can query on an ad hoc basis. There is a cookbook example of how to do that using a bash script and sqlite3 here (see also the rough sketch after this list).
uhppoted-app-db has a get-events command which does the same thing but with a whole lot less fuss. You just need to schedule the command to run every 30 minutes or so to fetch events to the DB of your choice.
uhppoted-httpd can display a user friendly view of the event log - although going by the version of your controllers you're going to have a lot of events which translates to a lot of scrolling. You're also not going to be able to run httpd and the Home Assistant custom component concurrently (well, you can but it's complicated and not recommended)
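For orientation only, a rough sketch of that kind of bash + sqlite3 pipeline - the get-event arguments and output handling here are assumptions, not the documented CLI; the cookbook example linked above is the authoritative version:
```bash
#!/bin/bash
# Sketch only: controller ID reused from the example above; the get-event
# invocation and its output format are assumed, not documented behaviour.
CONTROLLER=405419896
DB=events.db

sqlite3 "$DB" "CREATE TABLE IF NOT EXISTS events (controller TEXT, raw TEXT);"

# Fetch an event from the controller and append the raw record to the database.
EVENT=$(uhppote-cli get-event "$CONTROLLER")
sqlite3 "$DB" "INSERT INTO events (controller, raw) VALUES ('$CONTROLLER', '$EVENT');"
```
Scheduled from cron, this gives an ad hoc queryable log without touching Home Assistant.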
Hi,
I'm going to close this out - the latest alpha release includes the ability to configure the destination address for controller events and will set it automatically if it is incorrect.
Thanks for braving the early alpha release :-)
|
gharchive/issue
| 2024-03-07T18:14:44 |
2025-04-01T06:40:48.376501
|
{
"authors": [
"m0vse",
"uhppoted"
],
"repo": "uhppoted/uhppoted-app-home-assistant",
"url": "https://github.com/uhppoted/uhppoted-app-home-assistant/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1637916827
|
Files with TS only assets still get transpiled
Problem: Suppose a TS app has an interfaces directory. After transpilation to JS, those files are transpiled too but left empty. UI5 tooling then bundles them, but that's another thing.
The predefined setup does not handle this case out-of-the box. I've searched for babel plugins / settings but the results were not very good.
Babel does have an ignore property, but when --copy-files is enabled, the interfaces are properly not processed but are still copied into the dist, which is bad. There's also this issue, but the additional flag also does not function perfectly.
To handle the issue, therefore, an additional build step can be added (e.g. filtering or deleting unneeded files).
Before doing this however, I wanted to know whether there's a possibility to add this behavior to this plugin. Otherwise, additional customization steps for building TS UI5 apps need to be taken.
Thanks for any info in advance & BR
Hi @dfenerski ,
why does it need to be a .ts file at all? Can't it just be a .d.ts file? The ui5-tooling-transpile doesn't copy the .d.ts files when the generateDts option is disabled.
In general, if you have files only including type definitions, AFAICS, the recommendation is to use .d.ts files (https://stackoverflow.com/questions/37263357/how-to-declare-and-import-typescript-interfaces-in-a-separate-file). I found a small bug in ui5-tooling-transpile: in the case of generateDts, the relative names need to be made absolute. But I will tackle that one independently.
WDYT?
Cheers, Peter
BTW: for the issue in ui5-tooling-transpile, I created an issue: https://github.com/ui5-community/ui5-ecosystem-showcase/issues/743
Thanks for reaching out @petermuessig,
the described solution does indeed solve the problem, so I think this issue can be closed.
In my case, however, we also have TS for the backend, so implementing this would require changing some naming conventions for all **/interfaces/**/*.ts & **/types/**/*.ts files across the codebase to use the .d.ts extension instead of the current .ts one. Filtering those files in the builder section of the ui5.yaml files has in the meantime worked out fine (roughly as sketched below).
Maybe the .d.ts solution will be implemented in the future; however, I would like to research it some more beforehand (read more about ambient modules).
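For reference, a minimal sketch of that filtering - the glob patterns are assumptions matching the folders mentioned above, and the excludes list sits under builder > resources in ui5.yaml:
```yaml
# ui5.yaml - sketch only; adjust the globs to your project layout.
builder:
  resources:
    excludes:
      - "**/interfaces/**"
      - "**/types/**"
```
This keeps the type-only sources out of the built dist without renaming anything.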
Thanks & BR,
Dimitar
|
gharchive/issue
| 2023-03-23T16:58:48 |
2025-04-01T06:40:48.384870
|
{
"authors": [
"dfenerski",
"petermuessig"
],
"repo": "ui5-community/babel-plugin-transform-modules-ui5",
"url": "https://github.com/ui5-community/babel-plugin-transform-modules-ui5/issues/90",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
193244295
|
New lines in multi-line literal strings trimmed incorrectly
The library is incorrectly parsing/generating code from multi-line literal strings by condensing multiple new lines in the string into a single one.
Example:
EXPECTED:
>>> import toml
>>> toml.loads("a = '''\nhello\nworld\n\n\nbye'''")
{u'a': u'hello\nworld\n\nbye'}
ACTUAL:
>>> import toml
>>> toml.loads("a = '''\nhello\nworld\n\n\nbye'''")
{u'a': u'hello\nworld\nbye'}
We expect two new lines between world and bye, but we get one. According to the specification (emphasis mine):
Multi-line literal strings are surrounded by three single quotes on each side and allow newlines. Like literal strings, there is no escaping whatsoever. A newline immediately following the opening delimiter will be trimmed. All other content between the delimiters is interpreted as-is without modification.
I'll take a look at the code and propose a fix.
It looks like there's a test that contradicts the specification in @BurntSushi's tests. In particular, this test with this expected result. @avakar doesn't have a test case for this. I've fixed this issue in the Python script and will submit a PR. I believe @BurntSushi's test case is incorrect.
From the spec:
For writing long strings without introducing extraneous whitespace, use a "line ending backslash". When the last non-whitespace character on a line is a \, it will be trimmed along with all whitespace (including newlines) up to the next non-whitespace character or closing delimiter.
Closing as WONTFIX due to compliance with the spec.
@uiri That line of the spec is entirely irrelevant to this issue.
@uiri I totally aggree with @SergioBenitez
I'm sorry. You're right; it is relevant to the associated PR(s) but not to this issue.
|
gharchive/issue
| 2016-12-02T23:36:55 |
2025-04-01T06:40:48.417047
|
{
"authors": [
"SergioBenitez",
"noqqe",
"uiri"
],
"repo": "uiri/toml",
"url": "https://github.com/uiri/toml/issues/68",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1754733704
|
Logo for the documentation site?
I chose a simple logo almost at random just to have something at the top left of the docs site:
But it's boring and not really representative of the library, so if anyone has any ideas for a better logo - let's hear them!
Ah OK cool. Well maybe we don't need to change it then!
boring logo is good. it won't attract the wrong attention.
|
gharchive/issue
| 2023-06-13T12:06:48 |
2025-04-01T06:40:48.418890
|
{
"authors": [
"VJ911",
"pacharanero"
],
"repo": "uk-fci/nhs-number",
"url": "https://github.com/uk-fci/nhs-number/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
337576352
|
Delete ES docs on django delete
Issue number: N/A
Description of change
This:
adds a context manager that collects and deletes the objects from ES when they are deleted from the db
creates a search app that can be used to test generic logic
Checklist
[ ] Have any relevant search models been updated?
[ ] Have any relevant fixtures (fixtures/test_data.yaml) been updated?
[ ] Have any relevant select-/prefetch-related field lists in the views and search apps been updated?
[ ] Has the admin site been updated (for new models, fields etc.)?
[ ] Has the README been updated (if needed)?
Codecov Report
Merging #1011 into develop will increase coverage by 0.02%.
The diff coverage is 100%.
@@ Coverage Diff @@
## develop #1011 +/- ##
===========================================
+ Coverage 96.41% 96.44% +0.02%
===========================================
Files 206 207 +1
Lines 5753 5799 +46
Branches 544 551 +7
===========================================
+ Hits 5547 5593 +46
Misses 142 142
Partials 64 64
Impacted Files | Coverage Δ
...ahub/cleanup/management/commands/delete_orphans.py | 100% <100%> (ø) :arrow_up:
datahub/search/deletion.py | 100% <100%> (ø)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d2a7e01...45a3061. Read the comment docs.
Something else to consider is that the interaction search app does have a post_delete signal receiver, so we might need to disconnect some existing signal receiver while a Collector is active as well...
Something else to consider is that the interaction search app does have a post_delete signal receiver, so we might need to disconnect some existing signal receiver while a Collector is active as well...
True :/
|
gharchive/pull-request
| 2018-07-02T16:00:27 |
2025-04-01T06:40:48.469684
|
{
"authors": [
"codecov-io",
"marcofucci",
"reupen"
],
"repo": "uktrade/data-hub-leeloo",
"url": "https://github.com/uktrade/data-hub-leeloo/pull/1011",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
534649363
|
Not able to decrypt using jasypt maven plugin with 3.0.0
My application-local.properties file is located under ./src/main/resources.
Encryptor password is correct. The application starts fine and I can confirm the encrypted password is decrypted into its correct string.
I have not been able to get this utility to work.
mvn jasypt:decrypt -Djasypt.encryptor.password="MXyXswPQkxRanB3VXNMgYY2JJRYngE?aun(bHkVJK6PN4aYjT}/^ueLtex(R78W4" -Dspring.profiles.active=local -Djasypt.encryptor.algorithm=PBEWithMD5AndTripleDES
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------< com.protractinator:protractinator-web-api >-------------------
[INFO] Building Protractinator Web API (Spring Boot) 11.0.27-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- jasypt-maven-plugin:3.0.0:decrypt (default-cli) @ protractinator-web-api ---
[INFO] Starting MavenCli v3.6.3 on crash.local with PID 1618 (/usr/local/Cellar/maven/3.6.3/libexec/lib/maven-embedder-3.6.3.jar started by crash in /Users/crash/git/protractinator/protractinator-web-api)
[INFO] The following profiles are active: local
[INFO] Post-processing PropertySource instances
[INFO] Converting PropertySource configurationProperties [org.springframework.boot.context.properties.source.ConfigurationPropertySourcesPropertySource] to AOP Proxy
[INFO] Converting PropertySource systemProperties [org.springframework.core.env.PropertiesPropertySource] to EncryptableMapPropertySourceWrapper
[INFO] Converting PropertySource systemEnvironment [org.springframework.boot.env.SystemEnvironmentPropertySourceEnvironmentPostProcessor$OriginAwareSystemEnvironmentPropertySource] to EncryptableSystemEnvironmentPropertySourceWrapper
[INFO] Converting PropertySource random [org.springframework.boot.env.RandomValuePropertySource] to EncryptablePropertySourceWrapper
[INFO] Property Filter custom Bean not found with name 'encryptablePropertyFilter'. Initializing Default Property Filter
[INFO] Started MavenCli in 0.69 seconds (JVM running for 3.901)
[INFO] Decrypting file src/main/resources/application-local.properties
[INFO] String Encryptor custom Bean not found with name 'jasyptStringEncryptor'. Initializing Default String Encryptor
[INFO] Property Resolver custom Bean not found with name 'encryptablePropertyResolver'. Initializing Default Property Resolver
[INFO] Property Detector custom Bean not found with name 'encryptablePropertyDetector'. Initializing Default Property Detector
[INFO] Encryptor config not found for property jasypt.encryptor.key-obtention-iterations, using default value: 1000
[INFO] Encryptor config not found for property jasypt.encryptor.pool-size, using default value: 1
[INFO] Encryptor config not found for property jasypt.encryptor.provider-name, using default value: null
[INFO] Encryptor config not found for property jasypt.encryptor.provider-class-name, using default value: null
[INFO] Encryptor config not found for property jasypt.encryptor.salt-generator-classname, using default value: org.jasypt.salt.RandomSaltGenerator
[INFO] Encryptor config not found for property jasypt.encryptor.iv-generator-classname, using default value: org.jasypt.iv.RandomIvGenerator
[INFO] Encryptor config not found for property jasypt.encryptor.string-output-type, using default value: base64
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.936 s
[INFO] Finished at: 2019-12-09T08:47:08+07:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.github.ulisesbocchio:jasypt-maven-plugin:3.0.0:decrypt (default-cli) on project protractinator-web-api: Execution default-cli of goal com.github.ulisesbocchio:jasypt-maven-plugin:3.0.0:decrypt failed.: EncryptionOperationNotPossibleException -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
See https://github.com/ulisesbocchio/jasypt-spring-boot/blob/master/README.md#update-11242019-version-300-release-includes, you need
jasypt:
  encryptor:
    algorithm: PBEWithMD5AndDES
    iv-generator-classname: org.jasypt.iv.NoIvGenerator
to use your existing code
are you passing the password to the mvn command?
Hi @ulisesbocchio . Yes, passing the password as shown above.
Hi @lz1asl . Thank you. That did the trick.
This works:
mvn jasypt:decrypt -Djasypt.encryptor.password="MXyXswPQkxRanB3VXNMgYY2JJRYngE?aun(bHkVJK6PN4aYjT}/^ueLtex(R78W4" -Dspring.profiles.active=local -Djasypt.encryptor.algorithm=PBEWithMD5AndTripleDES -Djasypt.encryptor.iv-generator-classname=org.jasypt.iv.NoIvGenerator
|
gharchive/issue
| 2019-12-09T02:11:18 |
2025-04-01T06:40:48.478825
|
{
"authors": [
"bjornharvold",
"lz1asl",
"ulisesbocchio"
],
"repo": "ulisesbocchio/jasypt-spring-boot",
"url": "https://github.com/ulisesbocchio/jasypt-spring-boot/issues/177",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2282125782
|
How to tell the Matrix server to create a new profile?
Hi,
As the title says, I read through the documentation and issues here, but unless I'm missing something, I couldn't find how to do so.
Thanks!
Creating accounts through iamb is not currently supported. It is tracked under issue #258.
Gotcha
|
gharchive/issue
| 2024-05-07T02:07:17 |
2025-04-01T06:40:48.686081
|
{
"authors": [
"cig0",
"mordquist"
],
"repo": "ulyssa/iamb",
"url": "https://github.com/ulyssa/iamb/issues/282",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1585251431
|
ERR! ERESOLVE unable to resolve dependency tree
I just tried to update, so pulled the latest codebase, and ran:
$ sudo -u umami npm install
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: umami@1.40.0
npm ERR! Found: react@17.0.2
npm ERR! node_modules/react
npm ERR! react@"^17.0.0" from the root project
npm ERR! peer react@"^17.0.2 || ^18.0.0-0" from next@12.3.4
npm ERR! node_modules/next
npm ERR! next@"^12.3.1" from the root project
npm ERR! peer next@"^12.2.5" from next-basics@0.18.0
npm ERR! node_modules/next-basics
npm ERR! next-basics@"^0.18.0" from the root project
npm ERR! 1 more (react-dom)
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer react@"^18.2.0" from next-basics@0.18.0
npm ERR! node_modules/next-basics
npm ERR! next-basics@"^0.18.0" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR! See /home/umami/.npm/eresolve-report.txt for a full report.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/umami/.npm/_logs/2023-02-15T05_28_58_684Z-debug-0.log
If only npm could manage to write lucid error reports in moderately natural language. I cannot decipher this without some upskilling, but clearly something to do with react and next-basics.
Any tips for someone keen to understand/learn, and/or how a dependency tree can be broken?
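For what it's worth, the npm output above names its own escape hatches - both accept the mismatched peer dependency rather than fixing it upstream:
```bash
# From the npm error output above: accept the peer-dependency conflict.
npm install --legacy-peer-deps

# Or force resolution entirely (riskier, may produce a broken tree).
npm install --force
```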
Have you tried using yarn instead?
No. No experience with that. How do I do that?
Worked it out. That updated fine, built fine and starts fine. Alas, now when I try to log in:
Feb 16 12:13:30 shelob yarn[259269]: The column `account.account_uuid` does not exist in the current database.
Feb 16 12:13:30 shelob yarn[259269]: at RequestHandler.handleRequestError (/home/umami/umami/node_modules/@prisma/client/runtime/index.js:3194>
Feb 16 12:13:30 shelob yarn[259269]: at RequestHandler.handleAndLogRequestError (/home/umami/umami/node_modules/@prisma/client/runtime/index.j>
Feb 16 12:13:30 shelob yarn[259269]: at RequestHandler.request (/home/umami/umami/node_modules/@prisma/client/runtime/index.js:31908:12)
Feb 16 12:13:30 shelob yarn[259269]: at async PrismaClient._request (/home/umami/umami/node_modules/@prisma/client/runtime/index.js:32994:16)
Feb 16 12:13:30 shelob yarn[259269]: at async __WEBPACK_DEFAULT_EXPORT__ (/home/umami/umami/.next/server/pages/api/auth/login.js:107:21)
Feb 16 12:13:30 shelob yarn[259269]: at async Object.apiResolver (/home/umami/umami/node_modules/next/dist/server/api-utils/node.js:366:9)
Feb 16 12:13:30 shelob yarn[259269]: at async NextNodeServer.runApi (/home/umami/umami/node_modules/next/dist/server/next-server.js:481:9)
Feb 16 12:13:30 shelob yarn[259269]: at async Object.fn (/home/umami/umami/node_modules/next/dist/server/next-server.js:735:37)
Feb 16 12:13:30 shelob yarn[259269]: at async Router.execute (/home/umami/umami/node_modules/next/dist/server/router.js:247:36)
Feb 16 12:13:30 shelob yarn[259269]: at async NextNodeServer.run (/home/umami/umami/node_modules/next/dist/server/base-server.js:347:29) {
Feb 16 12:13:30 shelob yarn[259269]: code: 'P2022',
Feb 16 12:13:30 shelob yarn[259269]: clientVersion: '4.9.0',
Feb 16 12:13:30 shelob yarn[259269]: meta: { column: 'account.account_uuid' },
Feb 16 12:13:30 shelob yarn[259269]: batchRequestIdx: undefined
Feb 16 12:13:30 shelob yarn[259269]: }
Oh the joys. I shall drill down I guess. Silly me should have backed that database up ;-)
Looking into the database with psql, I see everything looks good. All my data seems intact ;-) but:
umami=# \d account
Table "public.account"
Column | Type | Collation | Nullable | Default
------------+--------------------------+-----------+----------+------------------------------------------
user_id | integer | | not null | nextval('account_user_id_seq'::regclass)
username | character varying(255) | | not null |
password | character varying(60) | | not null |
is_admin | boolean | | not null | false
created_at | timestamp with time zone | | | CURRENT_TIMESTAMP
updated_at | timestamp with time zone | | | CURRENT_TIMESTAMP
Indexes:
"account_pkey" PRIMARY KEY, btree (user_id)
"account_username_key" UNIQUE CONSTRAINT, btree (username)
Referenced by:
TABLE "website" CONSTRAINT "website_user_id_fkey" FOREIGN KEY (user_id) REFERENCES account(user_id) ON UPDATE CASCADE ON DELETE RESTRICT
reveals account_uuid is missing. Indeed. It seems in the yarn build a migration was missed. Is this indicative of a deeper problem? I could create a single column, and even populate it (I only have two accounts on the system). But it is worrying that I saw yarn apply migrations yet it seems to have missed one.
And before I fiddle with the schema by hand I'd want to know what else may have changed on it.
But guess what:
https://github.com/umami-software/umami/blob/master/sql/schema.postgresql.sql
It's not even in the schema! Dear me.
It looks like this migration:
https://github.com/umami-software/umami/blob/aceb904398b527658676879b11ff22135b7cbaf8/db/postgresql/migrations/04_add_uuid/migration.sql
was not applied!
So now, I need to work out which migrations have been applied and how to check and apply migrations.
Done. Inspected migrations 1 to 3 and confirmed they are all in my schema already, then ran migration 4 with psql and now I can log in again and it looks like I'm updated.
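For anyone else who lands here, this is roughly what applying a single migration file by hand looks like - the database name and connection details are assumptions, and the path is the one from the migration linked above:
```bash
# Sketch only: adjust user/database to your own setup before running.
psql -U umami -d umami -f db/postgresql/migrations/04_add_uuid/migration.sql
```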
|
gharchive/issue
| 2023-02-15T05:34:29 |
2025-04-01T06:40:48.692993
|
{
"authors": [
"bernd-wechner",
"mikecao"
],
"repo": "umami-software/umami",
"url": "https://github.com/umami-software/umami/issues/1786",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
15431868
|
Panel keeps on collapsing on touch of the views inside it (button)
Hi,
First of all, thanks for this one. =)
I just want to ask: I tried running the Demo app included in the project and then added some buttons inside the sliding panel, but I can't click on the buttons; instead the panel keeps on hiding.
Am I missing something here?
Thanks and regards!
Hello,
mPanelLayout.setDragView(this.findViewById(R.id.ivPullUp));
This method is not working for me. I have tried to set my whole view as the drag view but it's still not working. Please advise on the same.
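For reference, a minimal sketch of the usual pattern: make a small dedicated handle the drag view so that touches on the other children (e.g. buttons) are delivered as clicks instead of collapsing the panel. The layout and view IDs here are made up:
```java
import android.app.Activity;
import android.os.Bundle;
import com.sothree.slidinguppanel.SlidingUpPanelLayout;

public class DemoActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_demo); // hypothetical layout

        // Only the handle view starts drags; buttons elsewhere in the panel
        // keep receiving their own click events.
        SlidingUpPanelLayout panel = (SlidingUpPanelLayout) findViewById(R.id.sliding_layout);
        panel.setDragView(findViewById(R.id.drag_handle)); // hypothetical IDs
    }
}
```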
|
gharchive/issue
| 2013-06-12T02:22:47 |
2025-04-01T06:40:48.695359
|
{
"authors": [
"anandraj16992",
"rendecano"
],
"repo": "umano/AndroidSlidingUpPanel",
"url": "https://github.com/umano/AndroidSlidingUpPanel/issues/2",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1197412934
|
Bring implementations up-to-date
With the new spec, we have to update all implementations
[ ] #47
[ ] #48
[ ] #49
[ ] #50
[ ] #51
[ ] #52
[ ] #53
[ ] #54
[ ] #58
[ ] #62
[x] #55
Can we at first fix the client so we can actually test, whether the implementations are working?
We have to implement the new functionality in the client anyway. Let me once again refer you to #57
With new additions to the spec, more updates are needed #69
|
gharchive/issue
| 2022-04-08T14:48:49 |
2025-04-01T06:40:48.766664
|
{
"authors": [
"tectrixer",
"umgefahren"
],
"repo": "umgefahren/server-language-benchmark",
"url": "https://github.com/umgefahren/server-language-benchmark/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1229061790
|
u-form-item cannot be validated in the mini program
Version
1.8.5
Reproduction link
xn--bug-x28doa2596aqocv12e.com
Steps to reproduce
Create a new form and set custom rules directly. Validation works fine on H5, but not in the mini program - the validation result is false.
What is the expected result?
You guess.
What is the actual result?
Guess why I'm filing a bug.
Form page
I'm really fed up with this bug-reporting template - how does your company dare to go live without even testing locally?
And that "join the group" thing: a blank issue, with a join-the-group answer that can't even be found in the official docs.
uView is not a company.
|
gharchive/issue
| 2022-05-09T01:44:28 |
2025-04-01T06:40:48.770284
|
{
"authors": [
"956632862",
"yatoku"
],
"repo": "umicro/uView",
"url": "https://github.com/umicro/uView/issues/1230",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2397026607
|
Is your feature request related to a problem?
I want to configure custom extensions inside the BubbleMenu, but there is no way to configure them.
What solution would you like to see?
What alternatives have you considered?
No response
Do you have any other context or screenshots?
No response
Additional notes
No response
Willingness to contribute
[X] I am willing to participate in implementing this feature and contributing the code back to the upstream community
@qtch We have received your suggestion
We don't provide custom configuration for this - the bubble menu can only be enabled or disabled
|
gharchive/issue
| 2024-07-09T03:30:39 |
2025-04-01T06:40:48.796808
|
{
"authors": [
"qtch",
"umodoc"
],
"repo": "umodoc/editor",
"url": "https://github.com/umodoc/editor/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
487794006
|
Composer usage
How is this package meant to be used?
Possibly add some examples?
I tried to use bin/build, but got an error.
php vendor/umpirsky/language-list/bin/build
Warning: include(/var/www/vendor/umpirsky/language-list/bin/../vendor/autoload.php): failed to open stream: No such file or directory in /var/www/vendor/umpirsky/language-list/bin/build on line 4
Warning: include(): Failed opening '/var/www/vendor/umpirsky/language-list/bin/../vendor/autoload.php' for inclusion (include_path='.:/usr/local/lib/php') in /var/www/vendor/umpirsky/language-list/bin/build on line 4
Fatal error: Uncaught Error: Class 'Umpirsky\ListGenerator\Importer\Importer' not found in /var/www/vendor/umpirsky/language-list/bin/build:6
Stack trace:
#0 {main}
thrown in /var/www/vendor/umpirsky/language-list/bin/build on line 6
Just run php composer.phar install.
For more information, see https://getcomposer.org/doc/01-basic-usage.md#installing-dependencies
Sorry, let me explain some more.
I have installed this package with composer composer require umpirsky/language-list. It is installed in vendor.
My question is how the package is meant to be consumed after it has been installed.
Describe your use case please. You can use it in many ways, one of them is $languages = include 'language-list/data/en/language.php'.
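To expand on that one-liner, a minimal sketch of consuming the data files after a Composer install - the path and array keys are assumptions based on the package layout (data/<locale>/language.php returning an array keyed by language code):
```php
<?php
// Sketch only: adjust the path if your vendor/ directory lives elsewhere.
$languages = include __DIR__ . '/vendor/umpirsky/language-list/data/en/language.php';

echo $languages['fr'] . PHP_EOL; // expected to print "French"
```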
|
gharchive/issue
| 2019-08-31T17:41:45 |
2025-04-01T06:40:48.799886
|
{
"authors": [
"jsgv",
"umpirsky"
],
"repo": "umpirsky/language-list",
"url": "https://github.com/umpirsky/language-list/issues/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2462249997
|
General Documentation Review and Recording of Pending Items
Description
The project documentation needs a general review to ensure that all information is up to date and that there are no obsolete or incorrect sections. The person responsible for this task should review all the documentation, identify what needs to be changed or removed, and bring the team a list of recommended improvements.
Tasks
[x] Review the current documentation: analyze all the project documentation to identify information that is outdated, incorrect, or in need of adjustment.
[x] Note pending items: record all pending items found during the review, including what needs to be changed, updated, or removed.
[x] Identify obsolete content: list sections or topics that are obsolete and need to be removed or reworked.
[x] Suggest improvements: prepare a list of improvements for the documentation, including new sections that could be useful and adjustments to language, formatting, and structure.
[x] Share with the team: present the pending items and improvement suggestions to the team in a meeting or shared document, for discussion and prioritization of the necessary actions.
[x] Update the documentation: after the team approves the changes, make the necessary updates and adjustments to the documentation.
The documentation review has been completed and we resolved the issues raised in the last release.
|
gharchive/issue
| 2024-08-13T02:32:28 |
2025-04-01T06:40:48.805161
|
{
"authors": [
"Gxaite",
"manuvaladares"
],
"repo": "unb-mds/2024-1-MinasDeCultura",
"url": "https://github.com/unb-mds/2024-1-MinasDeCultura/issues/104",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2685283803
|
feat/59
Development of the code for the front-end design.
issue #59
The code is working correctly.
|
gharchive/pull-request
| 2024-11-23T03:55:13 |
2025-04-01T06:40:48.806741
|
{
"authors": [
"BrzGab",
"JuliaGabP"
],
"repo": "unb-mds/Squad13",
"url": "https://github.com/unb-mds/Squad13/pull/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1254135641
|
Cut is broken
Can no longer cut (1.4.3). Text disappears but is not in the clipboard (Windows). Perhaps related to plain text paste fix?
@elsylambert can you confirm?
@birchamp Verified: as stated in the issue description, text is removed on CTRL + X but is not on the clipboard, so paste does not work. Both on Mac and Windows.
@mandolyte
|
gharchive/issue
| 2022-05-31T16:46:56 |
2025-04-01T06:40:48.850437
|
{
"authors": [
"BincyJ",
"birchamp",
"deferredreward",
"elsylambert"
],
"repo": "unfoldingWord/tc-create-app",
"url": "https://github.com/unfoldingWord/tc-create-app/issues/1292",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1953168038
|
🛑 Unforest is down
In 485b8fa, Unforest (https://www.unforest.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Unforest is back up in d55ddaf after 11 minutes.
|
gharchive/issue
| 2023-10-19T22:42:10 |
2025-04-01T06:40:48.855449
|
{
"authors": [
"unforest"
],
"repo": "unforest/uptime",
"url": "https://github.com/unforest/uptime/issues/1282",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1955448687
|
🛑 Libreddit is down
In b29ab1e, Libreddit (https://libreddit.unforest.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Libreddit is back up in a7781ed after 58 minutes.
|
gharchive/issue
| 2023-10-21T11:23:07 |
2025-04-01T06:40:48.857803
|
{
"authors": [
"unforest"
],
"repo": "unforest/uptime",
"url": "https://github.com/unforest/uptime/issues/1626",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1943727164
|
🛑 Libreddit is down
In dcbfc8b, Libreddit (https://libreddit.unforest.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Libreddit is back up in dd4fc61 after 12 minutes.
|
gharchive/issue
| 2023-10-15T05:31:34 |
2025-04-01T06:40:48.860156
|
{
"authors": [
"unforest"
],
"repo": "unforest/uptime",
"url": "https://github.com/unforest/uptime/issues/252",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1968271548
|
🛑 SearXNG is down
In ccf977d, SearXNG (https://searxng.unforest.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SearXNG is back up in 8aff4ab after 52 minutes.
|
gharchive/issue
| 2023-10-30T12:45:14 |
2025-04-01T06:40:48.862677
|
{
"authors": [
"unforest"
],
"repo": "unforest/uptime",
"url": "https://github.com/unforest/uptime/issues/3607",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1988710338
|
🛑 Rimgo is down
In 8c000a2, Rimgo (https://rimgo.unforest.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Rimgo is back up in c58c537 after 14 minutes.
|
gharchive/issue
| 2023-11-11T04:09:56 |
2025-04-01T06:40:48.865062
|
{
"authors": [
"unforest"
],
"repo": "unforest/uptime",
"url": "https://github.com/unforest/uptime/issues/6034",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1993769750
|
🛑 Libreddit is down
In 3656596, Libreddit (https://libreddit.unforest.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Libreddit is back up in c330181 after 11 minutes.
|
gharchive/issue
| 2023-11-14T23:47:08 |
2025-04-01T06:40:48.867417
|
{
"authors": [
"unforest"
],
"repo": "unforest/uptime",
"url": "https://github.com/unforest/uptime/issues/6860",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
891852796
|
Added nominee: Community Health Toolkit
Added nominee: Community Health Toolkit
This is a duplicate of Medic Mobile, which already exists on our dataset, kindly update or replace the existing entry.
Resolved in commit 4e89da2
|
gharchive/pull-request
| 2021-05-14T11:40:35 |
2025-04-01T06:40:48.897096
|
{
"authors": [
"nathanbaleeta"
],
"repo": "unicef/publicgoods-candidates",
"url": "https://github.com/unicef/publicgoods-candidates/pull/519",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
982933983
|
Finalize TLD Explorations
Discuss with TLD team during the week of Sept 6
Finalize which explorations/prototypes we need to add to our product roadmap OR elaborate upon
First pass completed. Working on the details which will be included in the 2-pager.
|
gharchive/issue
| 2021-08-30T15:44:12 |
2025-04-01T06:40:48.898242
|
{
"authors": [
"amreenp7"
],
"repo": "unicef/publicgoods-roadmap",
"url": "https://github.com/unicef/publicgoods-roadmap/issues/64",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
628805997
|
Use PinoJS for logging, refactor tests, increase test coverage
This PR refactors the way we perform tests and resolves #40 and #45
This PR switches to using Jest as our test runner since it provides an easy-to-use mocking functionality for imported modules, which is extremely useful for unit tests.
Add Pino JS dependency (pino-http and pino-http-print)
Add Jest, remove Ava
Switch all tests from Ava testing syntax to Jest
Increases the test coverage for authentication, review and application services/controllers
CI now runs all tests on current and LTS versions of Node simultaneously
This PR doesn't affect the admin or mail services. These have huge tech debt and need to be refactored.
Codecov Report
Merging #46 into master will increase coverage by 4.60%.
The diff coverage is 90.00%.
@@ Coverage Diff @@
## master #46 +/- ##
==========================================
+ Coverage 66.79% 71.40% +4.60%
==========================================
Files 52 52
Lines 1045 1119 +74
Branches 107 122 +15
==========================================
+ Hits 698 799 +101
+ Misses 336 319 -17
+ Partials 11 1 -10
Impacted Files | Coverage Δ
src/util/auth/hs_auth.ts | 92.98% <90.00%> (+48.33%) :arrow_up:
src/util/fs/writer.ts | 23.07% <0.00%> (-1.93%) :arrow_down:
src/util/errorHandling/apiError.ts | 37.50% <0.00%> (-1.39%) :arrow_down:
src/controllers/applicationController.ts | 89.74% <0.00%> (-0.93%) :arrow_down:
src/routes/index.ts | 100.00% <0.00%> (ø)
src/models/sections.ts | 100.00% <0.00%> (ø)
src/models/settings.ts | 100.00% <0.00%> (ø)
src/util/cache/cache.ts | 100.00% <0.00%> (ø)
src/util/cache/index.ts | 100.00% <0.00%> (ø)
src/routes/adminRouter.ts | 100.00% <0.00%> (ø)
... and 20 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ea3c8fb...abebe51. Read the comment docs.
Don't merge yet, going to try and improve the auth tests :)
|
gharchive/pull-request
| 2020-06-02T00:07:20 |
2025-04-01T06:40:48.945750
|
{
"authors": [
"codecov-commenter",
"seanjparker"
],
"repo": "unicsmcr/hs_application",
"url": "https://github.com/unicsmcr/hs_application/pull/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1230531205
|
Support for #x... constructions to be able to collect groups of children (2nd stage)
Subj.
Codecov Report
Merging #45 (9d624c1) into master (9f9424f) will decrease coverage by 0.00%.
The diff coverage is 93.22%.
@@ Coverage Diff @@
## master #45 +/- ##
============================================
- Coverage 80.36% 80.36% -0.01%
- Complexity 914 917 +3
============================================
Files 179 179
Lines 4167 4191 +24
Branches 467 467
============================================
+ Hits 3349 3368 +19
- Misses 637 641 +4
- Partials 181 182 +1
Impacted Files | Coverage Δ
src/main/java/org/uast/astgen/base/ListUtils.java | 28.57% <50.00%> (ø)
...uast/astgen/codegen/java/ConverterClassFiller.java | 95.27% <90.62%> (-3.87%) :arrow_down:
...g/uast/astgen/codegen/java/ConverterGenerator.java | 92.85% <100.00%> (+0.54%) :arrow_up:
...g/uast/astgen/codegen/java/MatcherClassFiller.java | 100.00% <100.00%> (ø)
...org/uast/astgen/codegen/java/MatcherGenerator.java | 91.66% <100.00%> (+1.66%) :arrow_up:
...n/codegen/java/OrdinaryNodeBuilderConstructor.java | 90.00% <100.00%> (+0.23%) :arrow_up:
...ast/astgen/codegen/java/OrdinaryNodeGenerator.java | 82.35% <100.00%> (+0.53%) :arrow_up:
...main/java/org/uast/astgen/interpreter/Creator.java | 46.66% <100.00%> (ø)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9f9424f...9d624c1. Read the comment docs.
|
gharchive/pull-request
| 2022-05-10T03:10:00 |
2025-04-01T06:40:48.964895
|
{
"authors": [
"codecov-commenter",
"kniazkov"
],
"repo": "unified-ast/ast-generator",
"url": "https://github.com/unified-ast/ast-generator/pull/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
580068616
|
Travis CI & Docker build
Run Spark test suite on all branches
Not including the full integration tests as this takes hours (see upstream Spark Jenkins build server for examples)
Build a full distributable Spark package on all branches
Push a unipartdigital/spark:<branch-name> Docker image for long lived branches
branch-2.4, branch-3.0 etc.
Push a unipartdigital/spark:<version> Docker image for tag releases
<version> specified in pom.xml must match the tag name, otherwise the build will fail
Upload the Spark distributable package .tgz as an asset for tag releases
Closing due to insufficient Travis CI build time limits. CI will be moved to GitHub actions in a PR to follow as upstream already has some GitHub actions configured.
|
gharchive/pull-request
| 2020-03-12T16:41:38 |
2025-04-01T06:40:49.009312
|
{
"authors": [
"SteadBytes"
],
"repo": "unipartdigital/spark",
"url": "https://github.com/unipartdigital/spark/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1363311866
|
Broken link in Abilities doc
Reading https://www.unison-lang.org/learn/fundamentals/abilities/using-abilities-pt1/
But clicking the link at the bottom of the page to go to part two gives me a 404.
The link is https://www.unison-lang.org/learn{#0dtn1o2gj4}
Whereas the link in the popup menu is https://www.unison-lang.org/learn/fundamentals/abilities/using-abilities-pt2
Reported in Slack by Martin G
Thank you! Fixed in prod!
|
gharchive/issue
| 2022-09-06T13:43:49 |
2025-04-01T06:40:49.023612
|
{
"authors": [
"hojberg",
"rlmark"
],
"repo": "unisonweb/website",
"url": "https://github.com/unisonweb/website/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1270764799
|
Inconsistency: The message prefixes Activity and DetectedAnomaly are not in kafka-to-postgresql
Expected Behavior
The message prefixes "Activity" and "detectedAnomaly" should be present in the kafka-to-postgresql prefix.go file and also have their own px<>.go file
Current Behavior
The message prefixes activity and detectedAnomaly are documented on docs and soon to be documented on learn, but do not show up in the prefix.go of kafka-to-postgresql and also don't have a px<>.go file associated with them.
Relevant log output
No response
Steps to Reproduce
Look into the documentation and see activity as a documented message topic.
Look into kafka-to-postgresql, look for activity, and don't find it.
Context (Environment)
I am currently writing the documentation on learn.umh.app and stumbled on this issue as I was documenting.
Possible Solution
No response
Version
v0.9.x
Anything else?
No response
@JeremyTheocharis These two were neither in mqtt-to-postgresql nor in kafka-to-postgresql - are they actually relevant?
Can be closed. Will be used in the future by kafka-state-detector
|
gharchive/issue
| 2022-06-14T12:54:14 |
2025-04-01T06:40:49.034115
|
{
"authors": [
"JeremyTheocharis",
"Scarjit",
"Sphingobium"
],
"repo": "united-manufacturing-hub/united-manufacturing-hub",
"url": "https://github.com/united-manufacturing-hub/united-manufacturing-hub/issues/1172",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1108354029
|
Data Ingest Interface
Need a way of requesting data to be staged by the ADES within the SPS
There are several interfaces:
'traditional' download and ingest
CNM based ingest from SPS processing
User upload (e.g. an application developer) needs to store some data for future processing
is a known Cumulus process
can be seen here: https://unity-sds.github.io/unity-architecture/#/2. Unity Diagrams/Algorithm Execution/HOME and is similar to existing CNM mechanism
Needs some plumbing hooked up and requires some metadata from the user.
I'd like to close this epic and open up more actionable ones. Please see https://docs.google.com/spreadsheets/d/1ZMJh2IjJcpqL4ZX4uCEjkZhPIvjHwF_iERBD0rjAtk0/edit#gid=555044656 which details:
Ingest in place capability for Unity Generated Data Products via CNM
We have other features/requests for uploading data that are uncommitted (e.g. we need to plan when we want to add that functionality).
Close in favor of new actionable epics.
https://github.com/unity-sds/unity-data-services/issues/31
|
gharchive/issue
| 2022-01-19T17:12:05 |
2025-04-01T06:40:49.038953
|
{
"authors": [
"mike-gangl",
"ngachung"
],
"repo": "unity-sds/unity-data-services",
"url": "https://github.com/unity-sds/unity-data-services/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1621625857
|
[New Feature]: Scale the number of worker nodes in the Kubernetes cluster
Describe the feature request
Provide a concrete implementation of the SPS API pre-warm request. For now, the implementation could directly call the K8s functionality to scale the number of replicas in a DaemonSet - see examples here:
https://www.containiq.com/post/kubectl-scale
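For reference, a hedged sketch of the kind of direct kubectl call being considered - the resource name is made up, and note that kubectl scale targets Deployments/StatefulSets/ReplicaSets (DaemonSets are sized by node scheduling rather than a replica count):
```bash
# Hypothetical worker pool name; scale it to 5 replicas, then verify.
kubectl scale deployment sps-worker --replicas=5
kubectl get deployment sps-worker
```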
Later, we need to decouple the details of the specific Kubernetes implementation - perhaps implementing a registration model where a specific ADES implementation registers itself with a WPS-T front-end, and then under a scaling request the WPS-T method invokes ADES specific functionality.
Drew updated the SPS API to increase/decrease the number of nodes of a U-SPS cluster. Works great including reporting errors when trying to set the number of nodes outside of the allowed range.
|
gharchive/issue
| 2023-03-13T14:39:11 |
2025-04-01T06:40:49.041286
|
{
"authors": [
"LucaCinquini"
],
"repo": "unity-sds/unity-sps-prototype",
"url": "https://github.com/unity-sds/unity-sps-prototype/issues/171",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1622329061
|
#167 Fix deploy process timeout
Purpose
fix timeout in process deployments caused by slow docker image builds
Proposed Changes
[ADD] docker image that already includes needed python libraries
[ADD] docker pull during deployment of wpst to save image download time on process deployment
Issues
#167
Testing
Tested with requests from my machine, demonstrating that process deployment takes ~50s, reduced from >100s
Tested with smoke test github action runs
Other
PR is dependent on two other PR's:
- change in Unity job image repo TEST
- change in Unity job image repo DEV
Ryan:
The changes in this PR make sense to me - and in the 2 other PRs that this depends on.
Can you please merge into main then I will do a full test deploying from main?
Thanks
|
gharchive/pull-request
| 2023-03-13T21:56:54 |
2025-04-01T06:40:49.045200
|
{
"authors": [
"LucaCinquini",
"ryanhunter-jpl"
],
"repo": "unity-sds/unity-sps-prototype",
"url": "https://github.com/unity-sds/unity-sps-prototype/pull/172",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2400479506
|
[Bug]: Upgrade EKS 1.27 AMIs
MCP ticket was sent to me - cc'ed @LucaCinquini on it. A few EC2 instances in various venues are using an AMI that will be deprecated soon (aml2-eks-1-27). We must upgrade to a new blessed EKS AMI (aml2-eks-1-[28-30]).
venue | ec2 ID
Unity-SBG-Dev | i-0ab695b36a48c98f8
Unity-SBG-Dev | i-0e501c35bee15e723
Unity-SIPS-test | i-0097737f67bb24127
Unity-SIPS-test | i-0983dd05d02bcc622
Unity-SIPS-test | i-03fe3372a3193b5ad
Unity-Venue-Ops | i-06aa40b7bf5eb3360
Unity-Venue-Ops | i-0f0ec6eb6f1ca5be6
Unity-Venue-Dev | i-0012730114f0f3887
Unity-Venue-Dev | i-0202bda805a439cd4
Unity-Venue-Dev | i-0530656128821d57f
Unity-Venue-Dev | i-06b5a9fc49ba5314f
Unity-Venue-Dev | i-0c7e4ae0378e55335
Unity-Venue-Dev | i-0dfbbde73efb49548
Unity-Venue-Dev | i-0f116f943813f00ab
Unity-Venue-Test | i-06b736060d3f6c491
Unity-Venue-Test | i-0a0b9654930e9d58a
Unity-Venue-Test | i-0f4ab8002d597dd61
For Unity-sips-test, we might want to clear that out completely as that seems to be HySDS deployments.
@drewm-jpl : please take down SPS+EKS from Unity-SBG-Dev and Unity-SIPS-test. I can take care of upgrading the deployments on unity-vebnue-dev and unity-venue-test.
unity-venue-dev and unity-venue-test have been upgraded to the latest 1.29 AMI
All SPS instances that contained the obsolete AMI have been destroyed.
The following venues have been upgraded to the latest 1.29 AMI:
o unity-venue-dev
o unity-venue-ops
o unity-SBG-dev
|
gharchive/issue
| 2024-07-10T11:16:23 |
2025-04-01T06:40:49.053316
|
{
"authors": [
"LucaCinquini",
"mike-gangl"
],
"repo": "unity-sds/unity-sps",
"url": "https://github.com/unity-sds/unity-sps/issues/159",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
292050056
|
Vertex Color Error - Unity
Using the latest Alembic Importer, I am trying to use an animated Alembic file that has RGBA values stored at every vertex. They are being read in Maya (by recreating color sets), but not Unity. Is it possible to access the vertex color data? Or, could you recommend a direction to make it work? Thanks!
you may encounter this issue #50 (in short: delete AlembicMaterial.cs)
The colors look great and the transparencies are showing up. This may just be a bug, but the Alembic Stream Player is not playing the animation when I add the animation as part of the 'Animator' component. When I try to use it using the 'Animation' component, I've received the error "The AnimationClip 'lowpolly_Clip' used by the Animation component 'lowpolly' must be marked as Legacy."
The animations do work if I use it as part of the Timeline editor.
I am continuing my testing today and tomorrow.
Is it okay if I give you my full notes in about 24 hours?
of course. it will be great help. I will try to solve Animator issue..
Another issue - High transparencies (lower Alpha values) are not showing up. Can you check this on your end? What shader are you using?
Original Photo:
Unity Screenshot:
hmm, I think it is a shader's (or at least outside of AlembicImporter's) issue. I'm using my own debug shader ("Overlay.mat" in the package. it can visualize vertex color or other vertex attributes)
I'm going to sleep for a while..
I just updated package. Animation issue was fixed.
https://github.com/unity3d-jp/AlembicImporter/releases/download/20180201/AlembicForUnity.unitypackage
Thanks - will sleep and get back to you tomorrow
package updated again. fixed crash bug when there are multiple alembic prefabs in the scene..
https://github.com/unity3d-jp/AlembicImporter/releases/download/20180201/AlembicForUnity.unitypackage
I am still hitting the Animation/Animator issue.
Additionally, I've noticed that a lot of settings of my Alembic imports revert to their original setting if I restart Unity. For example, the importer automatically adds a material "Standard" to the models; however, I can't find a way to permanently change that material, besides creating a separate prefab. Additionally, if I want my Animation Clip associated with the Alembic to loop, it unchecks itself upon restart.
If possible, could you add an 'Alpha' or 'Transparency' tab to your Overlay material? I'd love to know whether it's being read perfectly. I like how you had the visualizations for UVs, tangents, and normals.
Thank you for all the help - will continue to write any issues if need be.
Update: wrote a shader that got the transparencies to work, so don't worry about the Alpha tab!
Animation is working at least on my machine. probably you need to reimport Alembic objects?
and certainly imported assets will be reverted when a project is opened or reloaded. FBX prefabs have the same behavior, but Alembic prefabs are more confusing as they seem not read-only. I will add a workaround for it..
I just realized what you meant about Animation. certainly animation "clip" was broken.
Fixed the animation clip, and I figured out how to make assets read-only. Package updated.
https://github.com/unity3d-jp/AlembicImporter/releases/download/20180201/AlembicForUnity.unitypackage
Animations work well! The clips also work. Thank you so much! I don't have any other immediate issues - I'll let you know if I ever hit them.
Thanks again!
glad to hear your problem solved :)
I close this issue (milestone reached!), but feel free to update here or add new issue.
https://github.com/unity3d-jp/AlembicImporter/releases/tag/20180205
Hi,
I am developing on a Mac and am using the latest release of this importer.
I see that you used the Overlay.mat material to show the vertex colors, but that material only shows pink on my machine.
Can you tell me what are the proper steps to take to show color? Is there another material/shader I should be using instead?
Thanks
@vutran000 simply assigning Overlay.mat should work (I confirmed it works on my Mac). Probably there is a problem that is specific to your machine.
if you need more help, send me your project that can reproduce the issue and Editor.log (https://docs.unity3d.com/Manual/LogFiles.html)
A unity package of my project and the editor log should be attached.
It does appear there is an issue with the Overlay.mat and my machine. The shader claims it is "not supported on this GPU".
OS: macOS Sierra 10.12.6
Unity: 2017.3.1f1
Alembic: 20180222
Thanks again
vutran000_Alembic_Project.zip
Editor.log says the Graphics API is OpenGL. Switching it to Metal solves the problem.
Or changing "#pragma target 4.5" in Overlay.shader to "#pragma target 3.0" also solves the problem. (I will include this change in future releases.)
Thank you very much for looking into this. Both of your solutions definitely got Overlay.shader to compile correctly, but I am still not able to get my alembic gameobject to show vertex color.
When I set the shader's attribute to "Colors", it just gives me white. The "Normals" attribute does show color, but it is not the correct vertex color.
Also not sure if this is related or not, but the alembic geometry does not animate when I build for MacOS. However, it does whenever I build the exact same scene for Windows.
When I set the shader's attribute to "Colors", it just gives me white.
I'm 99% sure the problem is .abc file, not shader or material.
ask me only when you are sure the problem is in this plugin. and attach the data that can reproduce the issue when you ask. you are wasting my time.
Yes sir. Thanks again for all your help.
Hi,
I dug around a little more and found my problem.
In the <AlembicImporter/Plugin/abci/Importer/aiPolyMesh.cpp> file, on line 275, it appears to only read in the colors if the color set is named "rgba". By default, Maya will name the exported color set "colorSet1".
I renamed my colorset in Maya to "rgba" and the colors showed up correctly in Unity.
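For anyone hitting the same thing, a minimal sketch of doing that rename with Maya's Python API - it assumes the mesh is selected and that its default color set is still called "colorSet1":
```python
# Sketch only: rename the default color set so the Alembic importer picks it up.
import maya.cmds as cmds

cmds.polyColorSet(rename=True, colorSet='colorSet1', newColorSet='rgba')
```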
I have attached two abc files. They are exactly the same, except for their colorset names. You should be able to reproduce the issue by applying the shader to both.
Thanks,
maya_abc_files.zip
|
gharchive/issue
| 2018-01-26T22:24:15 |
2025-04-01T06:40:49.071423
|
{
"authors": [
"i-saint",
"psparikh",
"vutran000"
],
"repo": "unity3d-jp/AlembicImporter",
"url": "https://github.com/unity3d-jp/AlembicImporter/issues/48",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2367121850
|
Customizing the auto-import path
Describe the feature
I want to optimize the code structure so that not all of it is in utils. Can this be achieved through configuration?
Additional information
[ ] Would you be willing to help implement this feature?
I am not sure if I understand your question. Do you want to add other folders that are auto imported? If yes, this is from the Nuxt docs:
You can also auto-import functions exported from custom folders or third-party packages by configuring the imports section of your nuxt.config file.
Same should apply to nitro.config.ts
First of all, thank you for your reply, but I still have some questions
This is my project structure
I want to automatically import modules in server/services
In nuxt.config.ts, I can set the frontend module path for auto-import via imports, but this is not available for backend modules
// `stores` Frontend module path
// `services` Backend module path
// I tried this way of writing, and it didn't work either. dirs: ['stores', 'server/services'],
imports: {
imports: [
{ name: 'consola', from: 'consola' },
{ name: '*', from: 'echarts', as: 'eCharts' },
{ name: '*', from: 'bpmn-js', as: 'bpmn' },
{ name: '*', from: 'esdk-obs-nodejs', as: 'obs' },
],
dirs: ['stores', 'services'],
},
Do I need to configure it separately in nitro?
Thanks
Hello, can someone help me?
|
gharchive/issue
| 2024-06-21T19:20:37 |
2025-04-01T06:40:49.111123
|
{
"authors": [
"Ena-Heleneto",
"MickL"
],
"repo": "unjs/nitro",
"url": "https://github.com/unjs/nitro/issues/2554",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2430200616
|
Re-exporting types from another package are not properly marked as type-only export in declaration file.
Environment
Both unbuild@latest (2) and unbuild@rc (3rc7)
NodeJS 20
Reproduction
https://github.com/NamesMT/sencrypt
Describe the bug
I am trying to re-export the class SHash as a type-only export:
export type { SHash } from '@namesmt/shash'
// Also tried this syntax:
export { type SHash } from '@namesmt/shash'
But the index.d.mts declaration result is:
export { SHash } from '@namesmt/shash';
This is not correct, and in the end user's IDE it looks like they can import and use SHash as a class, but it will error at runtime.
Additional context
No response
Logs
No response
Refer to source
So it is an issue of rollup-plugin-dts. Considering the plugin has been in Maintenance Mode, it may take a long time to resolve...
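One possible workaround in the meantime — an untested sketch, not verified against this repo — is to re-export through an explicit type alias, which is a pure type declaration and so is more likely to survive the d.ts bundling as type-only:

// src/index.ts — hypothetical workaround sketch
import type { SHash as SHashClass } from '@namesmt/shash'

// A type alias cannot be emitted as a value export;
// still, verify the generated index.d.mts after building.
export type SHash = SHashClass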
I have a similar issue. My dts generated a _mergeNamespaces function, which is just an "any" wrap 🤔
source:
export * as Rx from 'rxjs'
generated:
import * as rxjs from 'rxjs';
function _mergeNamespaces(n, m) {
m.forEach(function (e) {
e && typeof e !== 'string' && !Array.isArray(e) && Object.keys(e).forEach(function (k) {
if (k !== 'default' && !(k in n)) {
var d = Object.getOwnPropertyDescriptor(e, k);
Object.defineProperty(n, k, d.get ? d : {
enumerable: true,
get: function () { return e[k]; }
});
}
});
});
return Object.freeze(n);
}
var rx = /*#__PURE__*/_mergeNamespaces({
__proto__: null
}, [rxjs]);
export { rx as Rx };
|
gharchive/issue
| 2024-07-25T14:52:54 |
2025-04-01T06:40:49.116423
|
{
"authors": [
"NamesMT",
"cangSDARM",
"s3xysteak"
],
"repo": "unjs/unbuild",
"url": "https://github.com/unjs/unbuild/issues/422",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2009781227
|
Font Override wont work for all controls (probably due to this...)
Current behavior
Font overriding is not working for all controls.
For instance:
If we try to override the "LabelLargeFontFamily", it will work fine for TextBlocks.
However, Buttons will not reproduce the same override as expected, since the font resource they use only references "LabelLargeFontFamily" indirectly.
Taking a look at the themes code, I found:
In Typography.xaml, the following line:
And in Button.xaml, its font definition:
So, since it does not reference "LabelLargeFontFamily" directly, I suppose it will never pick up the override.
Expected behavior
Font Override should work everywhere.
How to reproduce it (as minimally and precisely as possible)
Just use Figma Plugin for example or a new app and try to override the referred font.
<FontFamily x:Key="LabelLargeFontFamily">Rock Salt</FontFamily>
<!--The font Rock Salt has been found on Google Fonts at the following url: https://fonts.google.com/specimen/Rock+Salt -->
<!--And its source can be downloaded directly from: https://fonts.gstatic.com/s/rocksalt/v22/MwQ0bhv11fWD6QsAVOZbsEk7hbBWrA.ttf -->
Environment
Nuget Package:
Package Version(s):
Affected platform(s):
[ ] iOS
[ ] Android
[ ] WebAssembly
[ ] UWP
[ ] MacOS
Anything else we need to know?
There's a tradeoff here that we have to figure out: either you have font-family-specific resource keys for each control, or they use the ones coming directly from Typography.xaml.
If we make the control styles use the ones from Typography.xaml instead of having things like OutlineButtonFamilyFamily, then we lose the ability to customize fonts for specific controls/styles and are limited to only overriding the font family globally.
Due to how resource aliasing works, you can't have Resource B be an alias for Resource A and then expect B to change if A is overridden at a later point in time.
@carldebilly FYI
|
gharchive/issue
| 2023-11-24T14:03:40 |
2025-04-01T06:40:49.157112
|
{
"authors": [
"iurycarlos",
"kazo0"
],
"repo": "unoplatform/Uno.Themes",
"url": "https://github.com/unoplatform/Uno.Themes/issues/1287",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
254198177
|
Email address instead of lan id
Hey guys
Is there a way of implementing this so the user can enter their email rather than their account id?
Looking at using this as part of an ADFS solution where the users log in with the email loaded in the AD email field.
Thanks
Our apologies. This is out of scope.
|
gharchive/issue
| 2017-08-31T03:11:27 |
2025-04-01T06:40:49.191328
|
{
"authors": [
"aronimus",
"mariodivece"
],
"repo": "unosquare/passcore",
"url": "https://github.com/unosquare/passcore/issues/80",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2277369663
|
[FEATURE_REQUEST] Full-Parameter Training/FT
Hi, opening this (as suggested by Daniel) just to request full-finetuning capability for Unsloth. Let's see if other people are interested.
Thanks.
+1
Oh, it seems like Discord members have shown Unsloth can do 97% "partial" full finetuning, albeit with the layernorms not updated.
|
gharchive/issue
| 2024-05-03T10:18:49 |
2025-04-01T06:40:49.196695
|
{
"authors": [
"danielhanchen",
"erwe324",
"terribilissimo"
],
"repo": "unslothai/unsloth",
"url": "https://github.com/unslothai/unsloth/issues/417",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1993012997
|
Request only the required profile data
Update the getProfileData() method to require the requested fields to be provided explicitly. This ensures the app only requests the data it needs, and does not cause server processing delays without reason.
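A rough illustration of the shape of such a change — the names, fields, and endpoint below are hypothetical, not the actual domain-profiles API:

// Hypothetical sketch only; the real getProfileData() signature differs.
type ProfileField = 'displayName' | 'avatar' | 'socialAccounts' | 'records';

// Callers now state exactly which fields they need, so the backend can
// skip processing for data that will never be rendered.
export async function getProfileData(
  domain: string,
  fields: ProfileField[],
): Promise<Record<string, unknown>> {
  const query = new URLSearchParams({ fields: fields.join(',') });
  const response = await fetch(`/api/public/${domain}?${query}`);
  return response.json();
}

// e.g. request only what the profile page renders:
// const data = await getProfileData('example.crypto', ['displayName', 'avatar']);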
/gcbrun
|
gharchive/pull-request
| 2023-11-14T15:43:59 |
2025-04-01T06:40:49.197677
|
{
"authors": [
"qrtp"
],
"repo": "unstoppabledomains/domain-profiles",
"url": "https://github.com/unstoppabledomains/domain-profiles/pull/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1428980802
|
Implement backend testing libraries
Same thing as #9 but for the backend
https://jestjs.io/ We are using this for backend testing.
We don't have anything we actually want to test in the backend yet, so just to be explicit, the completion conditions are basically:
Installing the library
Setting up a test for the endpoint below, checking the response value and status code. It's not something we actually need to test, but it's a good sanity check that the library is installed.
Updated steps:
Installing jest
Add a script to tces/package.json called "test" that runs tests in the server.
You make a test where you send a fetch request to the /api endpoint, and then verify that you get the expected response of "Hello from server!" and the expected status code of 200 (see the sketch below).
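A minimal sketch of what that test could look like — the port, file location, and JSON shape are assumptions, so adjust them to however /api actually responds:

// server/api.test.ts — illustrative only; assumes the server is already
// listening on localhost:8000 and that Node 18+ provides a global fetch.
describe('GET /api', () => {
  it('responds with "Hello from server!" and status 200', async () => {
    const response = await fetch('http://localhost:8000/api');
    expect(response.status).toBe(200);

    const body = await response.json();
    expect(body.message).toBe('Hello from server!');
  });
});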
|
gharchive/issue
| 2022-10-30T18:57:19 |
2025-04-01T06:40:49.211444
|
{
"authors": [
"edwardhuahan",
"kenneth-miura"
],
"repo": "uoftblueprint/tces",
"url": "https://github.com/uoftblueprint/tces/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2393863184
|
CMAKE issue IMPORTED_LOCATION not set
Hello,
I'm trying to build this on Windows. I'm using Git Bash and following the Git Bash instructions as recommended. When I do "cmake .." I get many errors like this:
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
I get that error repeated for _zlib, _libpng, and _glew. Files are still generated, but I can't build because I think I'm missing pangolin_export.h. When I try to build with "cmake --build .", I then get a bunch of errors like this:
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src\_pangolin.vcxproj]
I'm definitely out of my depth trying to build code like this, so I'm sure I'm making a mistake on my end. Any advice? Thank you.
cmake log
$ cmake ..
CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument value or use a ... suffix to tell
CMake that the project does not need compatibility with older versions.
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.22635.
-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
CMake Warning (dev) in CMakeModules/FindMediaFoundation.cmake:
A logical block opening on the line
C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/CMakeModules/FindMediaFoundation.cmake:13 (IF)
closes on the line
C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/CMakeModules/FindMediaFoundation.cmake:15 (ENDIF)
with mis-matching arguments.
Call Stack (most recent call first):
src/CMakeLists.txt:427 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- MediaFoundation Found and Enabled
-- libpng Found and Enabled
-- libjpeg Found and Enabled
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
CMake Warning at python/CMakeLists.txt:3 (find_package):
By not providing "Findpybind11.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "pybind11",
but CMake did not find one.
Could not find a package configuration file provided by "pybind11" with any
of the following names:
pybind11Config.cmake
pybind11-config.cmake
Add the installation prefix of "pybind11" to CMAKE_PREFIX_PATH or set
"pybind11_DIR" to a directory containing one of the above files. If
"pybind11" provides a separate development package or SDK, be sure it has
been installed.
CMake Deprecation Warning at external/pybind11/tools/pybind11Tools.cmake:8 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument value or use a ... suffix to tell
CMake that the project does not need compatibility with older versions.
Call Stack (most recent call first):
python/CMakeLists.txt:9 (include)
CMake Warning (dev) at external/pybind11/tools/FindPythonLibsNew.cmake:60 (find_package):
Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
are removed. Run "cmake --help-policy CMP0148" for policy details. Use
the cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
external/pybind11/tools/pybind11Tools.cmake:16 (find_package)
python/CMakeLists.txt:9 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Configuring done (1.4s)
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_glew" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_libpng" configuration
"MinSizeRel".
CMake Error in external/CMakeLists.txt:
IMPORTED_LOCATION not set for imported target "_zlib" configuration
"MinSizeRel".
-- Generating done (2.5s)
CMake Generate step failed. Build files cannot be regenerated correctly.
cmake --build log
$ cmake --build .
MSBuild version 17.7.2+d6990bcfa for .NET Framework
1>Checking Build System
Performing update step for '__glew'
No patch step for '__glew'
Performing configure step for '__glew'
CMake Deprecation Warning at CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument value or use a ... suffix to tell
CMake that the project does not need compatibility with older versions.
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.22635.
-- Configuring done (0.0s)
-- Generating done (0.1s)
-- Build files have been written to: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/src/__glew-build
Performing build step for '__glew'
MSBuild version 17.7.2+d6990bcfa for .NET Framework
1>Checking Build System
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/src/__glew/CMakeLists.txt
libglew_static.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\glew\src__glew-build\lib\Debug\glewd.lib
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/src/__glew/CMakeLists.txt
glewinfo.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\glew\src__glew-build\bin\Debug\glewinfo.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/src/__glew/CMakeLists.txt
libglew_shared.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\glew\src__glew-build\bin\Debug\glewd.dll
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/src/__glew/CMakeLists.txt
visualinfo.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\glew\src__glew-build\bin\Debug\visualinfo.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/src/__glew/CMakeLists.txt
Performing install step for '__glew'
MSBuild version 17.7.2+d6990bcfa for .NET Framework
libglew_static.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\glew\src__glew-build\lib\Debug\glewd.lib
glewinfo.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\glew\src__glew-build\bin\Debug\glewinfo.exe
libglew_shared.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\glew\src__glew-build\bin\Debug\glewd.dll
visualinfo.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\glew\src__glew-build\bin\Debug\visualinfo.exe
1>
-- Install configuration: "Debug"
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/lib/glewd.lib
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/lib/libglew_sharedd.lib
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/lib/glewd.dll
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/include/GL/glew.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/include/GL/glxew.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/include/GL/wglew.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/bin/glewinfo.exe
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/glew/bin/visualinfo.exe
Completed '__glew'
Performing update step for '__libjpeg'
No patch step for '__libjpeg'
Performing configure step for '__libjpeg'
CMake Warning (dev) at CMakeLists.txt:7 (project):
cmake_minimum_required() should be called prior to this top-level project()
call. Please see the cmake-commands(7) manual for usage documentation of
both commands.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.22635.
CMake Deprecation Warning at CMakeLists.txt:8 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument value or use a ... suffix to tell
CMake that the project does not need compatibility with older versions.
DIST_NAME: libjpeg
DIST_VERSION: 8.4.0
DIST_LICENSE: jpeg license
DIST_AUTHOR: Tom Lane, Guido Vollbeding, Philip Gladstone, Bill Allombert, Jim Boucher, Lee Crocker, Bob Friesenhahn, Ben Jackson, Julian Minguillon, Luis Ortiz, Geo
rge Phillips, Davide Rossi, Ge
DIST_MAINTAINER: Peter Kapec
DIST_URL: http://www.ijg.org/
DIST_DESC: Independent JPEG Group
DIST_DEPENDS:
CMake Warning (dev) at CMakeLists.txt:56 (get_target_property):
Policy CMP0026 is not set: Disallow use of the LOCATION target property.
Run "cmake --help-policy CMP0026" for policy details. Use the cmake_policy
command to set the policy and suppress this warning.
The LOCATION property should not be read from target "djpeg". Use the
target name directly with add_custom_command, or use the generator
expression $<TARGET_FILE>, as appropriate.
Call Stack (most recent call first):
CMakeLists.txt:61 (mytest)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:56 (get_target_property):
Policy CMP0026 is not set: Disallow use of the LOCATION target property.
Run "cmake --help-policy CMP0026" for policy details. Use the cmake_policy
command to set the policy and suppress this warning.
The LOCATION property should not be read from target "djpeg". Use the
target name directly with add_custom_command, or use the generator
expression $<TARGET_FILE>, as appropriate.
Call Stack (most recent call first):
CMakeLists.txt:63 (mytest)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Configuring done (0.1s)
CMake Warning (dev) at CMakeLists.txt:56 (get_target_property):
Policy CMP0026 is not set: Disallow use of the LOCATION target property.
Run "cmake --help-policy CMP0026" for policy details. Use the cmake_policy
command to set the policy and suppress this warning.
The LOCATION property should not be read from target "cjpeg". Use the
target name directly with add_custom_command, or use the generator
expression $<TARGET_FILE>, as appropriate.
Call Stack (most recent call first):
CMakeLists.txt:65 (mytest)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:56 (get_target_property):
Policy CMP0026 is not set: Disallow use of the LOCATION target property.
Run "cmake --help-policy CMP0026" for policy details. Use the cmake_policy
command to set the policy and suppress this warning.
The LOCATION property should not be read from target "djpeg". Use the
target name directly with add_custom_command, or use the generator
expression $<TARGET_FILE>, as appropriate.
Call Stack (most recent call first):
CMakeLists.txt:67 (mytest)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:56 (get_target_property):
Policy CMP0026 is not set: Disallow use of the LOCATION target property.
Run "cmake --help-policy CMP0026" for policy details. Use the cmake_policy
command to set the policy and suppress this warning.
The LOCATION property should not be read from target "cjpeg". Use the
target name directly with add_custom_command, or use the generator
expression $<TARGET_FILE>, as appropriate.
Call Stack (most recent call first):
CMakeLists.txt:69 (mytest)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:56 (get_target_property):
Policy CMP0026 is not set: Disallow use of the LOCATION target property.
Run "cmake --help-policy CMP0026" for policy details. Use the cmake_policy
command to set the policy and suppress this warning.
The LOCATION property should not be read from target "jpegtran". Use the
target name directly with add_custom_command, or use the generator
expression $<TARGET_FILE>, as appropriate.
Call Stack (most recent call first):
CMakeLists.txt:71 (mytest)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Generating done (0.3s)
-- Build files have been written to: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/src/__libjpeg-build
Performing build step for '__libjpeg'
MSBuild version 17.7.2+d6990bcfa for .NET Framework
1>Checking Build System
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/src/__libjpeg/CMakeLists.txt
jpeg.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\jpeg.lib
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/src/__libjpeg/CMakeLists.txt
cjpeg.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\cjpeg.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/src/__libjpeg/CMakeLists.txt
djpeg.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\djpeg.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/src/__libjpeg/CMakeLists.txt
jpegtran.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\jpegtran.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/src/__libjpeg/CMakeLists.txt
rdjpgcom.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\rdjpgcom.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/src/__libjpeg/CMakeLists.txt
wrjpgcom.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\wrjpgcom.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/src/__libjpeg/CMakeLists.txt
Performing install step for '__libjpeg'
MSBuild version 17.7.2+d6990bcfa for .NET Framework
jpeg.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\jpeg.lib
cjpeg.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\cjpeg.exe
djpeg.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\djpeg.exe
jpegtran.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\jpegtran.exe
rdjpgcom.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\rdjpgcom.exe
wrjpgcom.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libjpeg\src__libjpeg-build\Debug\wrjpgcom.exe
1>
-- Install configuration: "Debug"
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/bin/cjpeg.exe
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/bin/djpeg.exe
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/bin/jpegtran.exe
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/bin/rdjpgcom.exe
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/bin/wrjpgcom.exe
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/lib/jpeg.lib
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/include/jerror.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/include/jmorecfg.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/include/jpeglib.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/include/jconfig.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/share/libjpeg/doc/README
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/share/libjpeg/doc/install.txt
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/share/libjpeg/doc/usage.txt
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/share/libjpeg/doc/wizard.txt
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/share/libjpeg/doc/example.c
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/share/libjpeg/doc/libjpeg.txt
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/share/libjpeg/doc/structure.txt
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/share/libjpeg/doc/coderules.txt
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/share/libjpeg/doc/filelist.txt
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libjpeg/share/libjpeg/doc/change.log
Completed '__libjpeg'
Performing update step for '__zlib'
No patch step for '__zlib'
Performing configure step for '__zlib'
CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument value or use a ... suffix to tell
CMake that the project does not need compatibility with older versions.
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.22635.
-- Configuring done (0.1s)
-- Generating done (0.2s)
-- Build files have been written to: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/src/__zlib-build
Performing build step for '__zlib'
MSBuild version 17.7.2+d6990bcfa for .NET Framework
1>Checking Build System
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/src/__zlib/CMakeLists.txt
zlib.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\zlib\src__zlib-build\Debug\zlibd.dll
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/src/__zlib/CMakeLists.txt
example.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\zlib\src__zlib-build\Debug\example.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/src/__zlib/CMakeLists.txt
minigzip.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\zlib\src__zlib-build\Debug\minigzip.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/src/__zlib/CMakeLists.txt
zlibstatic.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\zlib\src__zlib-build\Debug\zlibstaticd.lib
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/src/__zlib/CMakeLists.txt
Performing install step for '__zlib'
MSBuild version 17.7.2+d6990bcfa for .NET Framework
zlib.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\zlib\src__zlib-build\Debug\zlibd.dll
example.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\zlib\src__zlib-build\Debug\example.exe
minigzip.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\zlib\src__zlib-build\Debug\minigzip.exe
zlibstatic.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\zlib\src__zlib-build\Debug\zlibstaticd.lib
1>
-- Install configuration: "Debug"
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/lib/zlibd.lib
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/bin/zlibd.dll
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/lib/zlibstaticd.lib
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/include/zconf.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/include/zlib.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/share/man/man3/zlib.3
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/zlib/share/pkgconfig/zlib.pc
Completed '__zlib'
Performing update step for '__libpng'
No patch step for '__libpng'
Performing configure step for '__libpng'
CMake Deprecation Warning at CMakeLists.txt:9 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument value or use a ... suffix to tell
CMake that the project does not need compatibility with older versions.
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.22635.
-- Configuring done (0.0s)
-- Generating done (0.3s)
-- Build files have been written to: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/src/__libpng-build
Performing build step for '__libpng'
MSBuild version 17.7.2+d6990bcfa for .NET Framework
1>Checking Build System
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/src/__libpng/CMakeLists.txt
png16.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libpng\src__libpng-build\Debug\libpng16d.dll
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/src/__libpng/CMakeLists.txt
png16_static.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libpng\src__libpng-build\Debug\libpng16_staticd.lib
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/src/__libpng/CMakeLists.txt
pngstest.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libpng\src__libpng-build\Debug\pngstest.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/src/__libpng/CMakeLists.txt
pngtest.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libpng\src__libpng-build\Debug\pngtest.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/src/__libpng/CMakeLists.txt
pngvalid.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libpng\src__libpng-build\Debug\pngvalid.exe
Building Custom Rule C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/src/__libpng/CMakeLists.txt
Performing install step for '__libpng'
MSBuild version 17.7.2+d6990bcfa for .NET Framework
png16.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libpng\src__libpng-build\Debug\libpng16d.dll
png16_static.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libpng\src__libpng-build\Debug\libpng16_staticd.lib
pngstest.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libpng\src__libpng-build\Debug\pngstest.exe
pngtest.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libpng\src__libpng-build\Debug\pngtest.exe
pngvalid.vcxproj -> C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\external\libpng\src__libpng-build\Debug\pngvalid.exe
1>
-- Install configuration: "Debug"
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/lib/libpng16d.lib
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/bin/libpng16d.dll
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/lib/libpng16_staticd.lib
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/include/png.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/include/pngconf.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/include/pnglibconf.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/include/libpng16/png.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/include/libpng16/pngconf.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/include/libpng16/pnglibconf.h
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/share/man/man3/libpng.3
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/share/man/man3/libpngpf.3
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/share/man/man5/png.5
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/lib/libpng/libpng16.cmake
-- Up-to-date: C:/Users/Blockheadsuper/OneDrive/Documents/Code/pangolin/build/external/libpng/lib/libpng/libpng16-debug.cmake
Completed '__libpng'
cl : command line warning D9002: ignoring unknown option '-fPIC' [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
file_extension.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
file_utils.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
sigstate.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
threadedfilebuf.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
timer.cpp
uri.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
image_io.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
image_io_exr.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
image_io_jpg.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
image_io_pango.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
image_io_png.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
image_io_ppm.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
image_io_raw.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
image_io_tga.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
image_io_zstd.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
pixel_format.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
packet.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
packetstream.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
packetstream_reader.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
packetstream_writer.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
Generating Code...
Compiling...
playback_session.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
glchar.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
gldraw.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
glfont.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
glpangoglu.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
gltext.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
gltexturecache.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
display.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
image_view.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
opengl_render_state.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
view.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
viewport.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
handler.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
handler_glbuffer.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
handler_image.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
datalog.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
plotter.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
input_record_repeat.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
vars.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
widgets.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
Generating Code...
Compiling...
stream_encoder_factory.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
video.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
video_input.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
video_interface_factory.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
video_output.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
video_output_interface_factory.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
test.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
images.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
images_out.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
split.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
pvn.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
pango.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
pango_video_output.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
debayer.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
shift.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
mirror.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
unpack.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
join.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
merge.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
json.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
Generating Code...
Compiling...
thread.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
video_viewer.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
display_win.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
uvc_mediafoundation.cpp
C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\include\pangolin/platform.h(45,13): fatal error C1083: Cannot open include file: 'pangolin/pangolin_export.h': No such
file or directory [C:\Users\Blockheadsuper\OneDrive\Documents\Code\pangolin\build\src_pangolin.vcxproj]
Generating Code...
@Blockheadsuper did you get this??
By the way, I've got an error while running setup.py -
|
gharchive/issue
| 2024-07-07T02:28:01 |
2025-04-01T06:40:49.427195
|
{
"authors": [
"Blockheadsuper",
"harshitsinghcode"
],
"repo": "uoip/pangolin",
"url": "https://github.com/uoip/pangolin/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2020652423
|
Add FrontdoorFirewallPolicy and FrontdoorSecurityPolicy to cdn group
Description of your changes
Fixes #599
I have:
[x] Run make reviewable test to ensure this PR is ready for review.
How has this code been tested
I've created the resources on a local kind cluster on my tenant. Resources have been created successfully.
➜ k get managed
NAME READY SYNCED EXTERNAL-NAME AGE
resourcegroup.azure.upbound.io/frontdoorfirewallpolicy True True frontdoorfirewallpolicy 34m
resourcegroup.azure.upbound.io/frontdoorsecuritypolicy True True frontdoorsecuritypolicy 38m
NAME READY SYNCED EXTERNAL-NAME AGE
frontdoorcustomdomain.cdn.azure.upbound.io/frontdoorsecuritypolicy True True frontdoorsecuritypolicy 7m19s
NAME READY SYNCED EXTERNAL-NAME AGE
frontdoorfirewallpolicy.cdn.azure.upbound.io/frontdoorfirewallpolicy True True frontdoorfirewallpolicy 34m
frontdoorfirewallpolicy.cdn.azure.upbound.io/frontdoorsecuritypolicy True True frontdoorsecuritypolicy 38m
NAME READY SYNCED EXTERNAL-NAME AGE
frontdoorprofile.cdn.azure.upbound.io/frontdoorfirewallpolicy True True frontdoorfirewallpolicy 34m
frontdoorprofile.cdn.azure.upbound.io/frontdoorsecuritypolicy True True frontdoorsecuritypolicy 37m
NAME READY SYNCED EXTERNAL-NAME AGE
frontdoorsecuritypolicy.cdn.azure.upbound.io/frontdoorsecuritypolicy True True frontdoorsecuritypolicy 38m
NAME READY SYNCED EXTERNAL-NAME AGE
dnszone.network.azure.upbound.io/frontdoorsecuritypolicy True True upbound-example.com 5m11s
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. Mikel Landa seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
/test-examples="examples/cdn/frontdoorsecuritypolicy.yaml,examples/cdn/frontdoorfirewallpolicy.yaml"
|
gharchive/pull-request
| 2023-12-01T11:10:20 |
2025-04-01T06:40:49.439978
|
{
"authors": [
"Mikel-Landa",
"Upbound-CLA"
],
"repo": "upbound/provider-azure",
"url": "https://github.com/upbound/provider-azure/pull/600",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
923169693
|
[release-1.2] Update Crossplane to v1.2.3
Description of your changes
Updates Crossplane version to v1.2.3.
Signed-off-by: hasheddan georgedanielmangum@gmail.com
See release notes for more information: https://github.com/crossplane/crossplane/releases/tag/v1.2.3
I have:
[x] Read and followed Upbound's contribution process.
[x] Run make reviewable to ensure this PR is ready for review.
[ ] Added backport release-x.y labels to auto-backport this PR, as appropriate.
How has this code been tested
make reviewable
@muvaf definitely :+1:
|
gharchive/pull-request
| 2021-06-16T22:25:10 |
2025-04-01T06:40:49.444075
|
{
"authors": [
"hasheddan"
],
"repo": "upbound/universal-crossplane",
"url": "https://github.com/upbound/universal-crossplane/pull/153",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2761628620
|
🛑 Main Website (NEW) is down
In 37808be, Main Website (NEW) (https://prod.upmin.edu.ph) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Main Website (NEW) is back up in 017e4b2 after 19 minutes.
|
gharchive/issue
| 2024-12-28T08:26:14 |
2025-04-01T06:40:49.447308
|
{
"authors": [
"upmin-dev"
],
"repo": "upmin-dev/UP-time",
"url": "https://github.com/upmin-dev/UP-time/issues/1276",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2648449506
|
🛑 BA English Creative Writing Website is down
In 9849fa5, BA English Creative Writing Website (https://baecw.upmin.edu.ph) was down:
HTTP code: 0
Response time: 0 ms
Resolved: BA English Creative Writing Website is back up in 2d9eb1b after 15 minutes.
|
gharchive/issue
| 2024-11-11T07:34:11 |
2025-04-01T06:40:49.449739
|
{
"authors": [
"upmin-dev"
],
"repo": "upmin-dev/UP-time",
"url": "https://github.com/upmin-dev/UP-time/issues/152",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2648462353
|
🛑 KOHA Library System / Catalog is down
In 658ba47, KOHA Library System / Catalog (https://koha.upmin.edu.ph) was down:
HTTP code: 0
Response time: 0 ms
Resolved: KOHA Library System / Catalog is back up in e358813 after 9 minutes.
|
gharchive/issue
| 2024-11-11T07:41:06 |
2025-04-01T06:40:49.452146
|
{
"authors": [
"upmin-dev"
],
"repo": "upmin-dev/UP-time",
"url": "https://github.com/upmin-dev/UP-time/issues/158",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2392647163
|
👷 Renovate: Update peaceiris/actions-gh-pages action to v4
Taken from my repo's Renovate PR.
Thanks! No breaking changes we need to be aware of, right?
I let all GH actions run on my PR-branch and all ran fine 👍🏼
Oops, letting the setup CI run will change this back - so I had to revert this and force-push.
+ - uses: peaceiris/actions-gh-pages@v3.7.3
- - uses: peaceiris/actions-gh-pages@v4
Thanks! Can you please make the PR here and I'll merge it? https://github.com/upptime/uptime-monitor/blob/2735e1b2bb69e63b987844dd8db29a0315e66b33/src/helpers/workflows.ts#L172
Here we go 👍🏼 https://github.com/upptime/uptime-monitor/pull/252
|
gharchive/pull-request
| 2024-07-05T13:47:58 |
2025-04-01T06:40:49.460705
|
{
"authors": [
"AnandChowdhary",
"thomasmerz"
],
"repo": "upptime/upptime",
"url": "https://github.com/upptime/upptime/pull/982",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
105270969
|
Support Visual Studio 2015
The README says that Prig supports Visual Studio 2013 Express for Windows Desktop or later, but that edition is no longer available. So Prig should support Visual Studio 2015.
I corrected this issue in the following commits:
v1: https://github.com/urasandesu/Prig/commit/76f6e22fa91c06440bd589392699d975fe030997
v2: https://github.com/urasandesu/Prig/commit/0dab93ae2c95a744a8efc8718f648e424002692e
|
gharchive/issue
| 2015-09-07T21:58:18 |
2025-04-01T06:40:49.483406
|
{
"authors": [
"urasandesu"
],
"repo": "urasandesu/Prig",
"url": "https://github.com/urasandesu/Prig/issues/45",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1473827280
|
Perfectionist's Checklist
Here are some things that I found internally unsatisfying in v2.
[x] I want to turn off help command
[ ] I want App.Name to be AppName to make it clear it is only used in help formatting
[x] No need to say that default for help is false (--help, -h show help (default: false))
#1633
[x] Don't say that my program accepts arguments when it doesn't (sub add [command options] [arguments...])
[x] Don't say subcommand accepts options when it doesn't (sub add [command options] [arguments...])
[x] Fail with an error if user supplies arguments that are not supported
#1797
To be expanded..
[ ] Allow Name: "--password" and Name: "-p, --password" to simplify grep for looking where options are defined
[x] Make subcommand -h take priority over missing required flag on level above
What problem does this solve?
Reaching inner peace.
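For the first checked item, v2 already has partial knobs; a minimal sketch, assuming the HideHelp and HideHelpCommand fields behave as their godoc describes (a workaround illustration, not the v3 design):

package main

import (
    "log"
    "os"

    "github.com/urfave/cli/v2"
)

func main() {
    app := &cli.App{
        Name: "sub",
        // HideHelpCommand removes the "help" subcommand but keeps -h/--help;
        // setting HideHelp instead would hide both.
        HideHelpCommand: true,
        Action: func(cCtx *cli.Context) error {
            return nil
        },
    }
    if err := app.Run(os.Args); err != nil {
        log.Fatal(err)
    }
}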
@abitrolly I agree with all of these points. In fact, these have been raised before; see
https://github.com/urfave/cli/issues?q=is%3Aopen+is%3Aissue+label%3Aarea%2Fv3
However, at this point v2 is in maintenance mode, so no new features/functionality. Most if not all of your issues are slated to be fixed in v3. Why v3, you say? We wanted to take the most requested features and create a new version of the software. As part of this we've rewritten the flag code to use generics, so that feature addition is localized and more manageable than it is currently. For example, see the recent changes for persistent flags in v3: they required an update to just 4 files, whereas currently we would have to update almost 15-20 files. So code maintenance is better. That being said, are you up for these changes coming in v3?
I don't understand generics, and I don't know how much time I will need to get up to speed with them, but when v3 is ready to be used, I can easily go through this checklist to see if it works for me.
@dearchap renamed feature request template to point to v3 https://github.com/urfave/cli/pull/1613
Thanks @abitrolly
@abitrolly I like the idea of having a single Name: "-p, --password" rather than aliases. I think v1 had support for this. @urfave/cli WDYT?
@abitrolly I've discussed with @meatballhat offline and we are going to keep the same Name/Aliases structure and not go back to the "-p, --password" style. I think we've covered all the items listed in this issue. Closing for now.
@dearchap thanks for taking this on. :)
|
gharchive/issue
| 2022-12-03T07:37:49 |
2025-04-01T06:40:49.525415
|
{
"authors": [
"abitrolly",
"dearchap"
],
"repo": "urfave/cli",
"url": "https://github.com/urfave/cli/issues/1612",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1643059829
|
combining BoolFlags into one using v2 does not work as expected
I have code that accepts the arguments -f/--foo and -b/--bar. Argument parsing is done via the v2 package. I can run my program like go run . -f -b but not like go run . -fb. Is there a way to make it work with go run . -fb? If it is not possible, which Go module can make this possible?
code:
package main

import (
    "fmt"
    "log"
    "os"

    "github.com/urfave/cli/v2"
)

func main() {
    var foo_count, bar_count bool
    app := &cli.App{
        Flags: []cli.Flag{
            &cli.BoolFlag{
                Name:        "foo",
                Usage:       "Foo",
                Aliases:     []string{"f"},
                Destination: &foo_count,
            },
            &cli.BoolFlag{
                Name:        "bar",
                Usage:       "Bar",
                Aliases:     []string{"b"},
                Destination: &bar_count,
            },
        },
        Action: func(cCtx *cli.Context) error {
            fmt.Println("foo_count", foo_count)
            fmt.Println("bar_count", bar_count)
            return nil
        },
    }
    if err := app.Run(os.Args); err != nil {
        log.Fatal(err)
    }
}
tests:
$ go run . -f
foo_count true
bar_count false
$ go run . -b
foo_count false
bar_count true
$ go run . -bf
Incorrect Usage: flag provided but not defined: -bf
NAME:
main - A new cli application
USAGE:
main [global options] command [command options] [arguments...]
COMMANDS:
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--foo, -f Foo (default: false)
--bar, -b Bar (default: false)
--help, -h show help
2023/03/25 15:54:00 flag provided but not defined: -bf
exit status 1
@milaniez You can set the UseShortOptionHandling flag in App to enable this behaviour.
https://github.com/urfave/cli/blob/v2-maint/app.go#L115
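For reference, here is a minimal sketch of that suggestion applied to the program above (assuming urfave/cli v2; the only new line is UseShortOptionHandling):

package main

import (
    "fmt"
    "log"
    "os"

    "github.com/urfave/cli/v2"
)

func main() {
    app := &cli.App{
        // Parse combined short options, so "-bf" is treated like "-b -f".
        UseShortOptionHandling: true,
        Flags: []cli.Flag{
            &cli.BoolFlag{Name: "foo", Usage: "Foo", Aliases: []string{"f"}},
            &cli.BoolFlag{Name: "bar", Usage: "Bar", Aliases: []string{"b"}},
        },
        Action: func(cCtx *cli.Context) error {
            fmt.Println("foo_count", cCtx.Bool("foo"))
            fmt.Println("bar_count", cCtx.Bool("bar"))
            return nil
        },
    }
    if err := app.Run(os.Args); err != nil {
        log.Fatal(err)
    }
}

With this in place, go run . -bf should report both flags as true.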
|
gharchive/issue
| 2023-03-28T01:32:00 |
2025-04-01T06:40:49.528742
|
{
"authors": [
"dearchap",
"milaniez"
],
"repo": "urfave/cli",
"url": "https://github.com/urfave/cli/issues/1713",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1350947279
|
Add configurable Base to int, uint and uint64 flags
What type of PR is this?
feature
What this PR does / why we need it:
This allows users to configure the base for integer parsing.
Which issue(s) this PR fixes:
Fixes #1462
Special notes for your reviewer:
I didn't find the location where I could add godoc for the Base field and its meaning.
Is this not possible?
I didn't yet add the same functionality for the IntSlice and UInt64Slice flags. I couldn't find a suitable test case where such a test fits. Do you think it's worth having the base configurable for the slice flags as well? If yes, which test case should I update?
There seem to be several lookup functions, e.g. https://github.com/urfave/cli/blob/77a5feffee931936e8fb1a7aabf01fb63b1b3eb6/flag_int.go#L83-L93
I don't have access to the base field in there resp. the flagset. Is this going to be a problem?
Testing
make test
Release Notes
(REQUIRED)
Allow to configure the base for integer parsing in int, uint and uint64 flags
LGTM. Do you think we need to specify somewhere in the flag help that the base is X?
A bit of godoc would certainly help the developers that are setting up the flags, otherwise they'd have to read the source code to understand its meaning and the accepted values. Might also be enough to mention it in the docs, like the timestamp flag (https://cli.urfave.org/v2/#timestamp-flag). Maybe the flag-spec yaml could receive an additional godoc property to render it in https://github.com/urfave/cli/blob/main/cmd/urfave-cli-genflags/generated.gotmpl#L26 ?
Whether the base should also be mentioned in --help (if that was your question) I don't have strong opinions there.
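For illustration, a minimal sketch of how the new option might look from application code; it assumes the field is exposed as Base on cli.IntFlag, as the PR description suggests, and is not meant as the definitive API:

package main

import (
    "fmt"
    "log"
    "os"

    "github.com/urfave/cli/v2"
)

func main() {
    app := &cli.App{
        Flags: []cli.Flag{
            // With Base set to 16, "--mask ff" would parse as 255.
            &cli.IntFlag{Name: "mask", Base: 16, Usage: "bit mask (hexadecimal)"},
        },
        Action: func(cCtx *cli.Context) error {
            fmt.Println("mask =", cCtx.Int("mask"))
            return nil
        },
    }
    if err := app.Run(os.Args); err != nil {
        log.Fatal(err)
    }
}

Mentioning the base in the generated --help text (e.g. "(base 16)") would probably address the question above.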
|
gharchive/pull-request
| 2022-08-25T13:57:57 |
2025-04-01T06:40:49.535023
|
{
"authors": [
"ccremer"
],
"repo": "urfave/cli",
"url": "https://github.com/urfave/cli/pull/1464",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1030914549
|
Problem with sqlalchemy .
async def search_offertypes(order: bool = False, parameter: OffertypeList = Depends(),
                            db: Session = Depends(get_db)) -> Any:
    return paginate(db.query(OfferType))
output:
TypeError: object of type 'Query' has no len()
What does the route registration look like?
I can't see how search_offertypes is registered on offer_router.
import time
from fastapi import APIRouter, Depends
from typing import Any, List
from sqlalchemy.orm import Session
from offertype.core.utils import get_db
from offertype.models import OfferType
from fastapi.responses import JSONResponse
from offertype.schemas import OffertypeCreate, OffertypeList, OffertypeUpdate
from fastapi.requests import Request
from fastapi.exception_handlers import HTTPException
from fastapi_pagination.ext.sqlalchemy import paginate

offer_router = APIRouter()

@offer_router.post("")
async def create_offertype(offertype: OffertypeCreate, request: Request, db: Session = Depends(get_db)):
    try:
        offer = OfferType(**offertype.dict())
        offer.dt_create = time.ctime()
        db.add(offer)
        db.commit()
        db.refresh(offer)
        return offer
    except Exception as e:
        print(e)
        return JSONResponse(status_code=403, content={'Error': 'Não foi possível cadastrar a offertype'})

@offer_router.post("/teste/")
def search_offertypes(db: Session = Depends(get_db)) -> Any:
    return paginate(db.query(OfferType))
Try updating the code to this (basically you should set response_model):
from fastapi_pagination import Page

@offer_router.get("/teste/", response_model=Page[Any])
def search_offertypes(db: Session = Depends(get_db)) -> Any:
    return paginate(db.query(OfferType))
If I use any Pydantic models inside Page, this happens
No problem, happy that you resolved your issue :)
|
gharchive/issue
| 2021-10-20T02:40:05 |
2025-04-01T06:40:49.548324
|
{
"authors": [
"ScrimForever",
"uriyyo"
],
"repo": "uriyyo/fastapi-pagination",
"url": "https://github.com/uriyyo/fastapi-pagination/issues/218",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
644046294
|
Add carousel to display multiple testimonials
Links
Issue link: https://github.com/usdigitalresponse/neighbor-express/issues/143
GIF/Screenshots:
Changes:
Add react-responsive-carousel dependency to show multiple testimonials
Update text color from blue to white.
Hmm, what about making this its own component so that we can do single quotes and/or a carousel?
Done! I'm keeping the conditional in QuotesCarousel though (UI would look weird if a carousel is shown for a single quote). WDYT?
Updated screenshots:
Single quote
Multiple quotes
|
gharchive/pull-request
| 2020-06-23T18:13:59 |
2025-04-01T06:40:49.591193
|
{
"authors": [
"alienngator",
"keithk"
],
"repo": "usdigitalresponse/neighbor-express",
"url": "https://github.com/usdigitalresponse/neighbor-express/pull/145",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1003588248
|
Open external links in new tabs
allow contact page mailto link to open in new tab
allow about page mailto link to open in new tab
comment against opening external links in same tab
Closes #718
LGTM. (But it would be nice if Trussworks' <Link> could do this).
Yeah totally!
|
gharchive/pull-request
| 2021-09-22T01:00:00 |
2025-04-01T06:40:49.593970
|
{
"authors": [
"vim-usds"
],
"repo": "usds/justice40-tool",
"url": "https://github.com/usds/justice40-tool/pull/731",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1570859341
|
RFC: Supporting for multi level dictionary blocks
A Bru file is made up of blocks.
There are two types of blocks
Dictionary Blocks
Text Blocks
Dictionary Block
A dictionary block consists of a set of key-value pairs. Currently we support only 1 level.
Example:
headers {
content-type: application/json
Authorization: Bearer 123
}
Text Block
A text block is a set of lines
Example:
body:json {
{
"hello": "world"
}
}
test {
function onResponse(request, response) {
expect(response.status).to.equal(200);
}
}
Nested Dictionary Blocks ?
I am wondering whether we should support nested dictionary blocks. For instance, it's a lot more elegant to describe request details in a nested manner. A good analogy is how Sass made CSS easier to write and maintain.
The user can choose to write multi line dictionaries or write nested dictionaries.
Without nested dictionaries
post {
url: "https://$host/api/nsk/v1/trip/info/legs"
}
headers {
authorization: Bearer $token
}
body {
"endDate": "$returnDate",
"beginDate": "$departDate",
"originStations": [ "$orig" ],
"destinationStations": [ "$dest" ]
}
vars {
legKey: $res.data.data[0].journeys[0].segments[0].legs[0].legKey
}
assert {
$res.status: 200
}
With nested dictionaries
post {
url: "https://$host/api/nsk/v1/trip/info/legs"
headers {
authorization: Bearer $token
}
body {
"endDate": "$returnDate",
"beginDate": "$departDate",
"originStations": [ "$orig" ],
"destinationStations": [ "$dest" ]
}
}
vars {
legKey: $res.data.data[0].journeys[0].segments[0].legs[0].legKey
}
assert {
$res.status: 200
}
Closing this issue.
If you are interested in Bru Lang improvements, checkout: https://www.brulang.org/
We plan to introduce support for yaml as a file storage option instead of .bru files. We'll reassess whether to deprecate the .bru format based on user feedback after the release that includes yaml support.
|
gharchive/issue
| 2023-02-04T09:48:18 |
2025-04-01T06:40:49.599254
|
{
"authors": [
"helloanoop"
],
"repo": "usebruno/bruno",
"url": "https://github.com/usebruno/bruno/issues/84",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1178350393
|
Error in build process with Paths
I'm executing "npm run build" but i have the next error:
This is the "getStaticProps" method.
This is the error:
This error just appears when I export the "table" props of next-rosetta
Make sure to have your i18n directory outside /pages.
Also check that the path in await import(here) is valid and that it imports the expected file.
|
gharchive/issue
| 2022-03-23T16:17:08 |
2025-04-01T06:40:49.605003
|
{
"authors": [
"lopezjurip",
"sebastiansandoval27"
],
"repo": "useflyyer/next-rosetta",
"url": "https://github.com/useflyyer/next-rosetta/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
580236303
|
read api and ssh endpoints from .lagoon.yml file
When working on a project that uses other-than default endpoints, it would be good for the CLI to read these from the project's own .lagoon.yml
Possible to do this, but would involve a lot of extra work to make it happen.
Is it possible to utilise the following, which is available in lagoon-cli v0.8+?
export LAGOONCONFIG=/path/to/mycluster.yaml
where /path/to/mycluster.yaml could be
current: mycluster
default: mycluster
lagoons:
mycluster:
graphql: https://api.lagoon.amazeeio.cloud/graphql
hostname: ssh.lagoon.amazeeio.cloud
kibana: https://logs-db-ui-lagoon-master.ch.amazee.io/
port: 32222
ui: https://ui-lagoon-master.ch.amazee.io
Having that env var defined means that any commands run will execute against the cluster defined in the given config.
We're looking to place less reliance on the api/ssh endpoints documented in .lagoon.yml - closing.
|
gharchive/issue
| 2020-03-12T21:30:02 |
2025-04-01T06:40:49.607297
|
{
"authors": [
"shreddedbacon",
"tobybellwood",
"twardnw"
],
"repo": "uselagoon/lagoon-cli",
"url": "https://github.com/uselagoon/lagoon-cli/issues/103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1432411289
|
Tag content needs to be restricted
Describe the bug
#tag test [URL](https://demo.usememos.com#title), contains white space.
The above will generate two tags:
tag
title),
The tag regexp is just restricted to non-blank characters and one space only:
https://github.com/usememos/memos/blob/5bdf15aecec545ebfa1205d7504dad6e1eb9e798/server/tag.go#L15
So I think there is a need for more restrictions on tag content, e.g. using (?!pattern) to exclude some special characters.
Steps to reproduce
omitted
Expected behavior
omitted
Screenshots or additional context
No response
I think special characters in tags should be allowed.
And for this issue, I think a solution is to change the regex of a tag to [space]#tag[space]. Or maybe someone else has a better solution?
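As a rough illustration of the "[space]#tag[space]" idea, here is a small Go sketch; the allowed character set is an arbitrary example for discussion, not the pattern memos actually uses:

package main

import (
    "fmt"
    "regexp"
)

// Only treat "#..." as a tag when it starts the line or follows whitespace,
// and only allow word characters, '/', '_' and '-' inside the tag itself.
var tagRe = regexp.MustCompile(`(^|\s)#([\w/_-]+)`)

func main() {
    text := "#tag test [URL](https://demo.usememos.com#title), contains white space."
    for _, m := range tagRe.FindAllStringSubmatch(text, -1) {
        fmt.Println(m[2])
    }
    // Prints only "tag"; "#title" is skipped because it is not preceded by whitespace.
}

This would avoid picking up fragments such as "title)," from URLs, at the cost of rejecting tags that contain other punctuation.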
It would be better to provide details for reproduction.
It still hasn't been fixed.
|
gharchive/issue
| 2022-11-02T03:16:06 |
2025-04-01T06:40:49.611335
|
{
"authors": [
"Mahoo12138",
"Zeng1998"
],
"repo": "usememos/memos",
"url": "https://github.com/usememos/memos/issues/401",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1230825851
|
⚠️ Buyfrom.io has degraded performance
In edb6353, Buyfrom.io ($BUYFROM_IO_URL) experienced degraded performance:
HTTP code: 200
Response time: 12160 ms
Resolved: Buyfrom.io performance has improved in 31b2e2d.
|
gharchive/issue
| 2022-05-10T08:47:50 |
2025-04-01T06:40:49.632139
|
{
"authors": [
"rbudiharso"
],
"repo": "usetada/status-page",
"url": "https://github.com/usetada/status-page/issues/238",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1740031982
|
Update sbt-scalafix to 0.11.0
Updates ch.epfl.scala:sbt-scalafix from 0.10.4 to 0.11.0.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "ch.epfl.scala", artifactId = "sbt-scalafix" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "ch.epfl.scala", artifactId = "sbt-scalafix" }
}]
labels: sbt-plugin-update, early-semver-major, semver-spec-minor, commit-count:1
Codecov Report
Patch and project coverage have no change.
Comparison is base (4f652ab) 50.58% compared to head (12fa757) 50.58%.
:exclamation: Your organization is not using the GitHub App Integration. As a result you may experience degraded service beginning May 15th. Please install the Github App Integration for your organization. Read more.
Additional details and impacted files
@@ Coverage Diff @@
## develop #410 +/- ##
========================================
Coverage 50.58% 50.58%
========================================
Files 5 5
Lines 85 85
Branches 2 2
========================================
Hits 43 43
Misses 42 42
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
|
gharchive/pull-request
| 2023-06-04T00:59:15 |
2025-04-01T06:40:49.700461
|
{
"authors": [
"codecov-commenter",
"usommerl"
],
"repo": "usommerl/graalnative4s",
"url": "https://github.com/usommerl/graalnative4s/pull/410",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1064181037
|
Publish a user sample that runs on specific hardware, not just SILS
Overview
Publish a user sample that runs on specific hardware, not just SILS.
Details
On a Raspberry Pi, for example. Not bare metal, though.
I'd also like to publish a PIC version eventually.
Close condition
Once it has been published
Did it:
https://github.com/arkedge/c2a-user-for-raspi
|
gharchive/issue
| 2021-11-26T07:37:35 |
2025-04-01T06:40:49.736672
|
{
"authors": [
"meltingrabbit"
],
"repo": "ut-issl/c2a-core",
"url": "https://github.com/ut-issl/c2a-core/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
777418723
|
EGAT train.py error, size mismatch problem
All I edited is: in "extract_semi_template_pattern.py", I deleted "exit(1)" to get ["patten_feat"] in the graph.
Then I ran train.py and got this size mismatch error:
Traceback (most recent call last):
File "train.py", line 322, in
h_pred, e_pred = GAT_model(g_dgl, x_atom)
File "/home/disk3/zzp/anaconda3/envs/Retro/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/disk3/zzp/IntelligentDesign/RetroXpert/model/gat.py", line 121, in forward
h, _ = self.gat[l](g, h, merge='flatten')
File "/home/disk3/zzp/anaconda3/envs/Retro/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/disk3/zzp/IntelligentDesign/RetroXpert/model/gat.py", line 72, in forward
outs = list(map(lambda x: x(g, h), self.heads))
File "/home/disk3/zzp/IntelligentDesign/RetroXpert/model/gat.py", line 72, in
outs = list(map(lambda x: x(g, h), self.heads))
File "/home/disk3/zzp/anaconda3/envs/Retro/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/disk3/zzp/IntelligentDesign/RetroXpert/model/gat.py", line 45, in forward
g.ndata['h'] = self.embed_node(h)
File "/home/disk3/zzp/anaconda3/envs/Retro/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/disk3/zzp/anaconda3/envs/Retro/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/home/disk3/zzp/anaconda3/envs/Retro/lib/python3.6/site-packages/torch/nn/functional.py", line 1371, in linear
output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [809 x 57], m2: [703 x 128] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:273
My package versions are exactly as requirements.txt specifies:
python 3.6
torch 1.2.0
dgl-cu10.2 0.4.2
Please keep "exit(1)", and run twice.
Please keep "exit(1)", and run twice.
Please keep "exit(1)", and run twice following the instructions.
# extract semi-tempaltes for training data
python extract_template_pattern.py --extract_pattern
# find semi-template patterns for all data
python extract_template_pattern.py
Oh, at first I thought these two commands were alternatives.
I got what your code means. It works well, just like the notes you wrote in the code. Wonderful job!
Please keep "exit(1)", and run twice following the instructions.
# extract semi-tempaltes for training data
python extract_template_pattern.py --extract_pattern
# find semi-template patterns for all data
python extract_template_pattern.py
Oh, at first I thought these two commands were alternative.
i got what your code means. It works well like the notes you wrote in the code now. Wonderful job!
Please keep "exit(1)", and run twice following the instructions.
# extract semi-tempaltes for training data
python extract_template_pattern.py --extract_pattern
# find semi-template patterns for all data
python extract_template_pattern.py
I can't find the extract_template_pattern.py. Do you mean extract_semi_template_pattern.py?
Please keep "exit(1)", and run twice following the instructions.
# extract semi-tempaltes for training data
python extract_template_pattern.py --extract_pattern
# find semi-template patterns for all data
python extract_template_pattern.py
I can't find the extract_template_pattern.py. Do you mean extract_semi_template_pattern.py?
Yes, it is extract_semi_template_pattern.py. Sorry for the typo.
@chaoyan1037 Hi, I didn't make any change to the code, but I am also facing the same error.
Namespace(batch_size=32, dataset='USPTO50K', epochs=80, exp_name='USPTO50K_typed', gat_layers=3, heads=4, hidden_dim=128, in_dim=703, load=False, logdir='logs', lr=0.0005, seed=123, test_on_train=False, test_only=False, typed=True, use_cpu=False, valid_only=False)
Counter({1: 3482, 0: 1415, 2: 102, 9: 1, 17: 1})
Counter({1: 27851, 0: 11296, 2: 849, 3: 4, 4: 4, 10: 2, 7: 1, 13: 1})
Epoch 1: 0%| | 0/1251 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 322, in <module>
h_pred, e_pred = GAT_model(g_dgl, x_atom)
File "/home/xiepengyu/miniconda3/envs/nips/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/xiepengyu/RetroXpert/model/gat.py", line 121, in forward
h, _ = self.gat[l](g, h, merge='flatten')
File "/home/xiepengyu/miniconda3/envs/nips/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/xiepengyu/RetroXpert/model/gat.py", line 72, in forward
outs = list(map(lambda x: x(g, h), self.heads))
File "/home/xiepengyu/RetroXpert/model/gat.py", line 72, in <lambda>
outs = list(map(lambda x: x(g, h), self.heads))
File "/home/xiepengyu/miniconda3/envs/nips/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/xiepengyu/RetroXpert/model/gat.py", line 45, in forward
g.ndata['h'] = self.embed_node(h)
File "/home/xiepengyu/miniconda3/envs/nips/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/xiepengyu/miniconda3/envs/nips/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/home/xiepengyu/miniconda3/envs/nips/lib/python3.6/site-packages/torch/nn/functional.py", line 1372, in linear
output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [844 x 704], m2: [703 x 128] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:290
How can I fix this? thx!
Modify the in_dim in "train.py" according to your number of extracted semi-templates.
For example, if you got 655 semi-templates, your in_dim should be 47 + 655.
Thanks, it works.
|
gharchive/issue
| 2021-01-02T07:10:57 |
2025-04-01T06:40:49.762974
|
{
"authors": [
"chaoyan1037",
"iamxpy",
"otori-bird"
],
"repo": "uta-smile/RetroXpert",
"url": "https://github.com/uta-smile/RetroXpert/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
785493112
|
JIT broken on iOS 14.4
Describe the issue
After updating to iOS 14.4 beta 2, JIT no longer works. Trying to launch a VM results in a black screen.
Configuration (required)
UTM Version: 2.0.15
OS Version: iOS 14.4 beta 2 (18D5043d)
Device Model: iPhone XR
Is it jailbroken? No
How did you install UTM? Signed with my zsign fork using paid developer cert and installed OTA from itms-services link
Debug log
-[CSConnection init]:251
2021-01-13 23:03:18.432 UTM[1989:117355] -[CSConnection init]:251
Running: -L /private/var/containers/Bundle/Application/0FF084CB-2AD5-45A6-81F5-F74C2BD3F640/UTM.app/qemu -S -qmp tcp:127.0.0.1:4000,server,nowait -spice port=4001,addr=127.0.0.1,disable-ticketing,image-compression=off,playback-compression=off,streaming-video=off -smp cpus=1,sockets=1,cores=1,threads=1 -machine pc, -accel tcg,split-wx=on -vga qxl -global PIIX4_PM.disable_s3=1 -global ICH9-LPC.disable_s3=1 -boot order=d -m 512 -soundhw ac97 -name "Virtual Machine" -device usb-ehci -device usb-tablet -device usb-mouse -device usb-kbd -drive "if=ide,media=cdrom,id=drive0,file=/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso,cache=writethrough" -device rtl8139,netdev=net0 -netdev user,id=net0 -device virtio-serial -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -chardev spicevmc,id=vdagent,debug=0,name=vdagent -uuid 2A1CB0FA-8363-47EF-8E5D-8EBD28C8E7F9 -rtc base=localtime
Loading libqemu-x86_64-softmmu.utm.dylib
2021-01-13 23:03:18.531 UTM[1989:117355] Running: -L /private/var/containers/Bundle/Application/0FF084CB-2AD5-45A6-81F5-F74C2BD3F640/UTM.app/qemu -S -qmp tcp:127.0.0.1:4000,server,nowait -spice port=4001,addr=127.0.0.1,disable-ticketing,image-compression=off,playback-compression=off,streaming-video=off -smp cpus=1,sockets=1,cores=1,threads=1 -machine pc, -accel tcg,split-wx=on -vga qxl -global PIIX4_PM.disable_s3=1 -global ICH9-LPC.disable_s3=1 -boot order=d -m 512 -soundhw ac97 -name "Virtual Machine" -device usb-ehci -device usb-tablet -device usb-mouse -device usb-kbd -drive "if=ide,media=cdrom,id=drive0,file=/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso,cache=writethrough" -device rtl8139,netdev=net0 -netdev user,id=net0 -device virtio-serial -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -chardev spicevmc,id=vdagent,debug=0,name=vdagent -uuid 2A1CB0FA-8363-47EF-8E5D-8EBD28C8E7F9 -rtc base=localtime
2021-01-13 23:03:18.531 UTM[1989:117355] Loading libqemu-x86_64-softmmu.utm.dylib
SPICE port not in use yet, retries left: 29
2021-01-13 23:03:18.657 UTM[1989:117362] SPICE port not in use yet, retries left: 29
Stream error Error Domain=NSPOSIXErrorDomain Code=61 "Connection refused" UserInfo={_kCFStreamErrorCodeKey=61, _kCFStreamErrorDomainKey=1}
QMP stream error seen: Error Domain=NSPOSIXErrorDomain Code=61 "Connection refused" UserInfo={_kCFStreamErrorCodeKey=61, _kCFStreamErrorDomainKey=1}
QMP connection failed, retries left: 29
2021-01-13 23:03:18.662 UTM[1989:117361] Stream error Error Domain=NSPOSIXErrorDomain Code=61 "Connection refused" UserInfo={_kCFStreamErrorCodeKey=61, _kCFStreamErrorDomainKey=1}
2021-01-13 23:03:18.662 UTM[1989:117361] QMP stream error seen: Error Domain=NSPOSIXErrorDomain Code=61 "Connection refused" UserInfo={_kCFStreamErrorCodeKey=61, _kCFStreamErrorDomainKey=1}
2021-01-13 23:03:18.662 UTM[1989:117361] QMP connection failed, retries left: 29
qemu-system: info: Spice: reds.c:4214:spice_server_set_seamless_migration: seamless migration enabled=0
qemu-system: info: Spice: reds.c:3359:do_spice_init: starting 0.14.1
qemu-system: info: Spice: char-device.c:690:red_char_device_reset_dev_instance: sin 0x0, char device 0x1605ec110
qemu-system: info: Spice: reds.c:2561:reds_init_socket: bound to 127.0.0.1:4001
qemu-system: info: Spice: main:0 (0x1605f4070): thread_id 0x16e48f000
qemu-system: info: Spice: inputs:0 (0x1605f0140): thread_id 0x16e48f000
qemu-system: info: Spice: reds.c:3297:spice_server_add_interface: SPICE_INTERFACE_MIGRATION
qemu-system: info: Spice: reds.c:3214:spice_server_add_interface: SPICE_INTERFACE_KEYBOARD
qemu-system: info: Spice: reds.c:3224:spice_server_add_interface: SPICE_INTERFACE_MOUSE
qemu-system: info: Spice: reds.c:3236:spice_server_add_interface: SPICE_INTERFACE_QXL
qemu-system: info: Spice: cursor-channel.c:238:cursor_channel_new: create cursor channel
qemu-system: info: Spice: cursor:0 (0x160564d40): thread_id 0x16e48f000
qemu-system: info: Spice: display-channel.c:2235:display_channel_new: create display channel
qemu-system: info: Spice: display:0 (0x1605f4130): thread_id 0x16e48f000
qemu-system: info: Spice: display-channel.c:236:display_channel_set_stream_video: sv off
qemu-system: info: Spice: red-worker.c:1360:red_worker_main: begin
qemu-system: warning: '-soundhw ac97' is deprecated, please use '-device AC97' instead
qemu-system: info: Spice: reds.c:3280:spice_server_add_interface: SPICE_INTERFACE_RECORD
qemu-system: info: Spice: record:0 (0x1605f41d0): thread_id 0x16e48f000
qemu-system: info: Spice: reds.c:3271:spice_server_add_interface: SPICE_INTERFACE_PLAYBACK
qemu-system: info: Spice: playback:0 (0x1605f4280): thread_id 0x16e48f000
qemu-system: info: Spice: display-channel.c:2090:display_channel_destroy_surfaces: trace
qemu-system: info: Spice: red-worker.c:490:dev_create_primary_surface: trace
qemu-system: info: Spice: display-channel.c:181:monitors_config_debug: monitors config count:1 max:1
qemu-system: info: Spice: display-channel.c:185:monitors_config_debug: +0+0 640x480
qemu-system: info: Spice: cursor-channel.c:318:cursor_channel_init_client: during_target_migrate: skip init
qemu-system: info: Spice: red-worker.c:490:dev_create_primary_surface: trace
qemu-system: info: Spice: display-channel.c:173:monitors_config_unref: freeing monitors config
qemu-system: info: Spice: display-channel.c:181:monitors_config_debug: monitors config count:1 max:1
qemu-system: info: Spice: display-channel.c:185:monitors_config_debug: +0+0 640x480
qemu-system: info: Spice: cursor-channel.c:318:cursor_channel_init_client: during_target_migrate: skip init
Connected to stream
QMP connection successful! (readStream:1)
2021-01-13 23:03:19.701 UTM[1989:117361] Connected to stream
2021-01-13 23:03:19.701 UTM[1989:117361] QMP connection successful! (readStream:1)
Debug JSON recieved <- {
QMP = {
capabilities = (
oob
);
version = {
package = "";
qemu = {
major = 5;
micro = 0;
minor = 2;
};
};
};
}
Connected to stream
QMP connection successful! (readStream:0)
2021-01-13 23:03:19.702 UTM[1989:117361] Debug JSON recieved <- {
QMP = {
capabilities = (
oob
);
version = {
package = "";
qemu = {
major = 5;
micro = 0;
minor = 2;
};
};
};
}
2021-01-13 23:03:19.702 UTM[1989:117361] Connected to stream
Got QMP handshake: {
QMP = {
capabilities = (
oob
);
version = {
package = "";
qemu = {
major = 5;
micro = 0;
minor = 2;
};
};
};
}
2021-01-13 23:03:19.702 UTM[1989:117361] QMP connection successful! (readStream:0)
2021-01-13 23:03:19.702 UTM[1989:117364] Got QMP handshake: {
QMP = {
capabilities = (
oob
);
version = {
package = "";
qemu = {
major = 5;
micro = 0;
minor = 2;
};
};
};
}
Debug JSON send -> {
execute = "qmp_capabilities";
}
2021-01-13 23:03:19.702 UTM[1989:117364] Debug JSON send -> {
execute = "qmp_capabilities";
}
Debug JSON recieved <- {
return = {
};
}
2021-01-13 23:03:19.705 UTM[1989:117360] Debug JSON recieved <- {
return = {
};
}
qemuQmpDidConnect
2021-01-13 23:03:19.705 UTM[1989:117364] qemuQmpDidConnect
Debug JSON send -> {
execute = "query-block";
}
2021-01-13 23:03:19.708 UTM[1989:117364] Debug JSON send -> {
execute = "query-block";
}
-[CSSession initWithSession:]:294
shared directory disabled
2021-01-13 23:03:19.709 UTM[1989:117366] -[CSSession initWithSession:]:294
2021-01-13 23:03:19.709 UTM[1989:117366] shared directory disabled
qemu-system: info: GSpice: spice-session.c:1802 no migration in progress
Debug JSON recieved <- {
return = (
{
device = drive0;
inserted = {
"backing_file_depth" = 0;
bps = 0;
"bps_rd" = 0;
"bps_wr" = 0;
cache = {
direct = 0;
"no-flush" = 0;
writeback = 0;
};
"detect_zeroes" = off;
drv = raw;
encrypted = 0;
"encryption_key_missing" = 0;
file = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
image = {
"actual-size" = 132120576;
"dirty-flag" = 0;
filename = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
format = raw;
"virtual-size" = 132120576;
};
iops = 0;
"iops_rd" = 0;
"iops_wr" = 0;
"node-name" = "#block135";
ro = 1;
"write_threshold" = 0;
};
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[22]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = "ide1-cd0";
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[23]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = floppy0;
locked = 0;
qdev = "/machine/unattached/device[16]";
removable = 1;
type = unknown;
},
{
device = sd0;
locked = 0;
removable = 1;
type = unknown;
}
);
}
2021-01-13 23:03:19.712 UTM[1989:117360] Debug JSON recieved <- {
return = (
{
device = drive0;
inserted = {
"backing_file_depth" = 0;
bps = 0;
"bps_rd" = 0;
"bps_wr" = 0;
cache = {
direct = 0;
"no-flush" = 0;
writeback = 0;
};
"detect_zeroes" = off;
drv = raw;
encrypted = 0;
"encryption_key_missing" = 0;
file = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
image = {
"actual-size" = 132120576;
"dirty-flag" = 0;
filename = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
format = raw;
"virtual-size" = 132120576;
};
iops = 0;
"iops_rd" = 0;
"iops_wr" = 0;
"node-name" = "#block135";
ro = 1;
"write_threshold" = 0;
};
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[22]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = "ide1-cd0";
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[23]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = floppy0;
locked = 0;
qdev = "/machine/unattached/device[16]";
removable = 1;
type = unknown;
},
{
device = sd0;
locked = 0;
removable = 1;
type = unknown;
}
);
}
Debug JSON send -> {
execute = cont;
}
2021-01-13 23:03:19.715 UTM[1989:117364] Debug JSON send -> {
execute = cont;
}
Debug JSON recieved <- {
event = RESUME;
timestamp = {
microseconds = 716496;
seconds = 1610575399;
};
}
Debug JSON recieved <- {
return = {
};
}
qemuHasResumed
2021-01-13 23:03:19.717 UTM[1989:117360] Debug JSON recieved <- {
event = RESUME;
timestamp = {
microseconds = 716496;
seconds = 1610575399;
};
}
2021-01-13 23:03:19.717 UTM[1989:117360] Debug JSON recieved <- {
return = {
};
}
2021-01-13 23:03:19.717 UTM[1989:117362] qemuHasResumed
qemu-system: info: GSpice: spice-channel.c:141 main-1:0: spice_channel_constructed
qemu-system: info: GSpice: spice-session.c:2282 main-1:0: new main channel, switching
2021-01-13 22:03:19,740 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:134 new channel (#0)
2021-01-13 22:03:19,741 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:137 new main channel
cs_channel_new:140
2021-01-13 23:03:19.741 UTM[1989:117366] cs_channel_new:140
2021-01-13 22:03:19,742 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSSession.m:237 Changing main channel from 0x0 to 0x15e026500
qemu-system: info: GSpice: spice-channel.c:2707 main-1:0: Open coroutine starting 0x15e026500
qemu-system: info: GSpice: spice-channel.c:2544 main-1:0: Started background coroutine 0x15e024030
qemu-system: info: GSpice: spice-session.c:2234 main-1:0: Using plain text, port 4001
qemu-system: info: GSpice: spice-session.c:2165 open host 127.0.0.1:4001
qemu-system: info: GSpice: spice-session.c:2087 main-1:0: connecting 0x149786c08...
qemu-system: info: GSpice: spice-session.c:2071 main-1:0: connect ready
Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49541;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 746271;
seconds = 1610575399;
};
}
2021-01-13 23:03:19.746 UTM[1989:117364] Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49541;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 746271;
seconds = 1610575399;
};
}
qemu-system: info: GSpice: spice-channel.c:1367 main-1:0: channel type 1 id 0 num common caps 1 num caps 1
qemu-system: info: GSpice: spice-channel.c:1391 main-1:0: Peer version: 2:2
qemu-system: info: GSpice: spice-channel.c:1947 main-1:0: spice_channel_recv_link_msg: 2 caps
qemu-system: info: GSpice: spice-channel.c:1961 main-1:0: got remote common caps:
qemu-system: info: GSpice: spice-channel.c:1897 0:0xB
qemu-system: info: GSpice: spice-channel.c:1968 main-1:0: got remote channel caps:
qemu-system: info: GSpice: spice-channel.c:1897 0:0x9
qemu-system: info: GSpice: spice-channel.c:2926 test cap 0 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2926 test cap 1 in 0xB: yes
qemu-system: info: Spice: reds.c:2159:reds_handle_auth_mechanism: Auth method: 1
qemu-system: info: GSpice: spice-channel.c:2926 test cap 3 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2000 main-1:0: use mini header: 1
qemu-system: info: Spice: reds.c:1789:reds_handle_main_link: trace
qemu-system: info: Spice: reds.c:603:reds_disconnect: trace
qemu-system: info: Spice: reds.c:1642:reds_info_new_channel: channel 1:0, connected successfully, over Non Secure link
2021-01-13 22:03:19,792 MESSAGE (null)-main channel: opened
qemu-system: info: Spice: red_channel_client_class_init
qemu-system: info: Spice: reds.c:1828:reds_handle_main_link: NEW Client 0x16059b000 mcc 0x16125e9e0 connect-id 16807
qemu-system: info: GSpice: spice-channel.c:1298 main-1:0: channel up, state 3
Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 1;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49541;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 792381;
seconds = 1610575399;
};
}
2021-01-13 23:03:19.792 UTM[1989:117364] Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 1;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49541;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 792381;
seconds = 1610575399;
};
}
qemu-system: info: GSpice: spice-session.c:2386 set mm time: 9702379
qemu-system: info: GSpice: spice-session.c:2389 spice_session_set_mm_time: mm-time-reset, old 9702780, new 9702379
qemu-system: info: GSpice: channel-main.c:1693 server name: Virtual Machine
qemu-system: info: GSpice: channel-main.c:1704 server uuid: 2a1cb0fa-8363-47ef-8e5d-8ebd28c8e7f9
qemu-system: info: Spice: main:0 (0x1605f4070): net test: latency 1.149000 ms, bitrate 11838150289 bps (11289.739884 Mbps)
qemu-system: info: Spice: red-channel-client.c:792:red_channel_client_start_connectivity_monitoring: trace
qemu-system: info: GSpice: spice-channel.c:141 playback-5:0: spice_channel_constructed
2021-01-13 22:03:19,794 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:134 new channel (#0)
2021-01-13 22:03:19,794 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:154 new audio channel
2021-01-13 22:03:19,794 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:134 new channel (#0)
qemu-system: info: GSpice: spice-channel.c:141 record-6:0: spice_channel_constructed
qemu-system: info: Spice: channel-display-gst.c:718:gstvideo_debug_available_decoders: From 1 video decoder elements, 1 can handle caps image/jpeg: jpegdec
qemu-system: info: Spice: channel-display-gst.c:748:gstvideo_has_codec: From 1 decoders, none can handle 'video/x-vp8'
qemu-system: info: GSpice: channel-display.c:894 GStreamer does not support the vp8 codec
qemu-system: info: Spice: channel-display-gst.c:748:gstvideo_has_codec: From 1 decoders, none can handle 'video/x-h264'
qemu-system: info: GSpice: channel-display.c:894 GStreamer does not support the h264 codec
qemu-system: info: Spice: channel-display-gst.c:748:gstvideo_has_codec: From 1 decoders, none can handle 'video/x-vp9'
qemu-system: info: GSpice: channel-display.c:894 GStreamer does not support the vp9 codec
qemu-system: info: Spice: channel-display-gst.c:748:gstvideo_has_codec: From 1 decoders, none can handle 'video/x-h265'
qemu-system: info: GSpice: channel-display.c:894 GStreamer does not support the h265 codec
cs_channel_new:147
qemu-system: info: GSpice: spice-channel.c:141 display-2:0: spice_channel_constructed
2021-01-13 22:03:19,795 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:134 new channel (#0)
2021-01-13 22:03:19,795 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:146 new display channel (#0)
2021-01-13 23:03:19.795 UTM[1989:117495] cs_channel_new:147
qemu-system: info: GSpice: spice-channel.c:141 cursor-4:0: spice_channel_constructed
2021-01-13 22:03:19,795 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:134 new channel (#0)
qemu-system: info: GSpice: spice-channel.c:141 inputs-3:0: spice_channel_constructed
2021-01-13 22:03:19,795 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:134 new channel (#0)
qemu-system: info: GSpice: spice-channel.c:2707 playback-5:0: Open coroutine starting 0x15c558d70
qemu-system: info: GSpice: spice-channel.c:2544 playback-5:0: Started background coroutine 0x15c5568a0
qemu-system: info: GSpice: spice-session.c:2234 playback-5:0: Using plain text, port 4001
qemu-system: info: GSpice: spice-channel.c:2707 record-6:0: Open coroutine starting 0x15c992370
qemu-system: info: GSpice: spice-channel.c:2544 record-6:0: Started background coroutine 0x15c98fea0
qemu-system: info: GSpice: spice-session.c:2234 record-6:0: Using plain text, port 4001
qemu-system: info: GSpice: spice-channel.c:2707 display-2:0: Open coroutine starting 0x15c55b3b0
qemu-system: info: GSpice: spice-channel.c:2544 display-2:0: Started background coroutine 0x15c558ee0
qemu-system: info: GSpice: spice-session.c:2234 display-2:0: Using plain text, port 4001
qemu-system: info: GSpice: spice-session.c:2165 open host 127.0.0.1:4001
qemu-system: info: GSpice: spice-session.c:2087 playback-5:0: connecting 0x14a786c08...
qemu-system: info: GSpice: spice-session.c:2165 open host 127.0.0.1:4001
qemu-system: info: GSpice: spice-session.c:2087 record-6:0: connecting 0x14b786c08...
qemu-system: info: GSpice: spice-session.c:2165 open host 127.0.0.1:4001
qemu-system: info: GSpice: spice-session.c:2087 display-2:0: connecting 0x14c786c08...
qemu-system: info: GSpice: spice-session.c:2071 playback-5:0: connect ready
qemu-system: info: GSpice: spice-channel.c:1367 playback-5:0: channel type 5 id 0 num common caps 1 num caps 1
qemu-system: info: GSpice: spice-session.c:2071 record-6:0: connect ready
qemu-system: info: GSpice: spice-channel.c:1367 record-6:0: channel type 6 id 0 num common caps 1 num caps 1
qemu-system: info: GSpice: spice-session.c:2071 display-2:0: connect ready
qemu-system: info: GSpice: spice-channel.c:1367 display-2:0: channel type 2 id 0 num common caps 1 num caps 1
Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49542;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 796181;
seconds = 1610575399;
};
}
2021-01-13 23:03:19.796 UTM[1989:117366] Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49542;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 796181;
seconds = 1610575399;
};
}
Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49543;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 796249;
seconds = 1610575399;
};
}
2021-01-13 23:03:19.796 UTM[1989:117366] Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49543;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 796249;
seconds = 1610575399;
};
}
Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49544;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 828624;
seconds = 1610575399;
};
}
qemu-system: info: GSpice: spice-channel.c:1391 playback-5:0: Peer version: 2:2
qemu-system: info: GSpice: spice-channel.c:1947 playback-5:0: spice_channel_recv_link_msg: 2 caps
qemu-system: info: GSpice: spice-channel.c:1961 playback-5:0: got remote common caps:
qemu-system: info: GSpice: spice-channel.c:1897 0:0xB
qemu-system: info: GSpice: spice-channel.c:1968 playback-5:0: got remote channel caps:
qemu-system: info: GSpice: spice-channel.c:1897 0:0xA
qemu-system: info: GSpice: spice-channel.c:2926 test cap 0 in 0xB: yes
2021-01-13 23:03:19.828 UTM[1989:117364] Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49544;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 828624;
seconds = 1610575399;
};
}
qemu-system: info: GSpice: spice-channel.c:2926 test cap 1 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2926 test cap 3 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2000 playback-5:0: use mini header: 1
qemu-system: info: Spice: reds.c:2159:reds_handle_auth_mechanism: Auth method: 1
qemu-system: info: GSpice: spice-channel.c:1391 display-2:0: Peer version: 2:2
qemu-system: info: GSpice: spice-channel.c:1947 display-2:0: spice_channel_recv_link_msg: 2 caps
qemu-system: info: GSpice: spice-channel.c:1961 display-2:0: got remote common caps:
qemu-system: info: GSpice: spice-channel.c:1897 0:0xB
qemu-system: info: GSpice: spice-channel.c:1968 display-2:0: got remote channel caps:
qemu-system: info: GSpice: spice-channel.c:1897 0:0x1052
qemu-system: info: GSpice: spice-channel.c:2926 test cap 0 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2926 test cap 1 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2926 test cap 3 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2000 display-2:0: use mini header: 1
qemu-system: info: Spice: reds.c:1642:reds_info_new_channel: channel 5:0, connected successfully, over Non Secure link
qemu-system: info: Spice: sound.c:1085:playback_channel_client_constructed: playback client 0x15c5633a0 using mode raw
qemu-system: info: GSpice: spice-channel.c:1298 playback-5:0: channel up, state 3
qemu-system: info: GSpice: channel-playback.c:345 playback-5:0: playback_handle_mode: time 9702831 mode 1 data 0x280448a16 size 0
Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 5;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49542;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 844461;
seconds = 1610575399;
};
}
2021-01-13 23:03:19.845 UTM[1989:117364] Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 5;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49542;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 844461;
seconds = 1610575399;
};
}
qemu-system: info: Spice: reds.c:2159:reds_handle_auth_mechanism: Auth method: 1
qemu-system: info: GSpice: spice-channel.c:1391 record-6:0: Peer version: 2:2
qemu-system: info: GSpice: spice-channel.c:1947 record-6:0: spice_channel_recv_link_msg: 2 caps
qemu-system: info: GSpice: spice-channel.c:1961 record-6:0: got remote common caps:
qemu-system: info: GSpice: spice-channel.c:1897 0:0xB
qemu-system: info: GSpice: spice-channel.c:1968 record-6:0: got remote channel caps:
qemu-system: info: GSpice: spice-channel.c:1897 0:0x6
qemu-system: info: GSpice: spice-channel.c:2926 test cap 0 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2926 test cap 1 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2926 test cap 3 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2000 record-6:0: use mini header: 1
qemu-system: info: Spice: reds.c:1642:reds_info_new_channel: channel 2:0, connected successfully, over Non Secure link
qemu-system: info: Spice: red-qxl.c:80:red_qxl_set_display_peer:
qemu-system: info: Spice: reds.c:2159:reds_handle_auth_mechanism: Auth method: 1
qemu-system: info: GSpice: spice-channel.c:1298 display-2:0: channel up, state 3
qemu-system: info: GSpice: channel-display.c:1069 display-2:0: spice_display_channel_up: cache_size 83886080, glz_window_size 25161728 (bytes)
Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 2;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49544;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 858443;
seconds = 1610575399;
};
}
qemu-system: info: Spice: red-worker.c:719:handle_dev_display_connect: connect new client
2021-01-13 23:03:19.859 UTM[1989:117364] Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 2;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49544;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 858443;
seconds = 1610575399;
};
}
qemu-system: info: Spice: reds.c:1642:reds_info_new_channel: channel 6:0, connected successfully, over Non Secure link
Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 6;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49543;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 859828;
seconds = 1610575399;
};
}
2021-01-13 23:03:19.860 UTM[1989:117364] Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 6;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49543;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 859828;
seconds = 1610575399;
};
}
qemu-system: info: GSpice: spice-channel.c:1298 record-6:0: channel up, state 3
qemu-system: info: Spice: dcc.c:518:dcc_new: New display (client 0x16059b000) dcc 0x161254890 stream 0x282825300
qemu-system: info: Spice: display-channel.c:2374:display_channel_update_compression: jpeg disabled
qemu-system: info: Spice: display-channel.c:2375:display_channel_update_compression: zlib-over-glz disabled
qemu-system: info: Spice: image-encoders.c:734:create_glz_dictionary: Lz Window 1 Size=6290432
qemu-system: info: Spice: dcc.c:551:display_channel_client_wait_for_init: creating encoder with id == 0
qemu-system: info: GSpice: channel-display.c:1899 surface flags: 1
qemu-system: info: GSpice: channel-display.c:947 display-2:0: Create primary canvas
qemu-system: info: GSpice: spice-channel.c:2926 test cap 1 in 0x1052: yes
qemu-system: info: GSpice: channel-display.c:1969 display-2:0: received new monitors config from guest: n: 1/1
qemu-system: info: GSpice: channel-display.c:1989 display-2:0: monitor id: 0, surface id: 0, +0+0-640x480
-[CSDisplayMetal initWithSession:channelID:monitorID:]:304
cs_channel_new:178
2021-01-13 23:03:19.863 UTM[1989:117495] -[CSDisplayMetal initWithSession:channelID:monitorID:]:304
2021-01-13 23:03:19.863 UTM[1989:117495] cs_channel_new:178
qemu-system: info: GSpice: channel-display.c:559 display-2:0: get primary 0x161420000
2021-01-13 22:03:19,863 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSDisplayMetal.m:118 0:0 update monitor area
-[CSInput initWithSession:channelID:monitorID:]:435
2021-01-13 22:03:19,864 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSInput.m:64 0:0 mouse mode 1
2021-01-13 23:03:19.863 UTM[1989:117495] -[CSInput initWithSession:channelID:monitorID:]:435
qemu-system: info: GSpice: channel-display.c:1123 display-2:0: display_handle_mark
qemu-system: info: GSpice: spice-channel.c:2707 inputs-3:0: Open coroutine starting 0x15c561f60
qemu-system: info: GSpice: spice-channel.c:2544 inputs-3:0: Started background coroutine 0x15c55fa90
qemu-system: info: GSpice: spice-session.c:2234 inputs-3:0: Using plain text, port 4001
qemu-system: info: GSpice: spice-channel.c:2707 cursor-4:0: Open coroutine starting 0x15c55f960
qemu-system: info: GSpice: spice-channel.c:2544 cursor-4:0: Started background coroutine 0x15c55d490
qemu-system: info: GSpice: spice-session.c:2234 cursor-4:0: Using plain text, port 4001
qemu-system: info: GSpice: spice-session.c:2165 open host 127.0.0.1:4001
qemu-system: info: GSpice: spice-session.c:2087 inputs-3:0: connecting 0x14d786c08...
qemu-system: info: GSpice: spice-session.c:2165 open host 127.0.0.1:4001
qemu-system: info: GSpice: spice-session.c:2087 cursor-4:0: connecting 0x14e786c08...
resizing to (414.000000, 896.000000)
2021-01-13 23:03:19.866 UTM[1989:117355] resizing to (414.000000, 896.000000)
qemu-system: GSpice: spice_main_channel_send_monitor_config: assertion 'c->agent_connected' failed
qemu-system: info: GSpice: spice-session.c:2071 inputs-3:0: connect ready
qemu-system: info: GSpice: spice-channel.c:1367 inputs-3:0: channel type 3 id 0 num common caps 1 num caps 0
qemu-system: info: GSpice: spice-session.c:2071 cursor-4:0: connect ready
qemu-system: info: GSpice: spice-channel.c:1367 cursor-4:0: channel type 4 id 0 num common caps 1 num caps 0
Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49545;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 866175;
seconds = 1610575399;
};
}
2021-01-13 23:03:19.866 UTM[1989:117364] Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49545;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 866175;
seconds = 1610575399;
};
}
Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49546;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 866295;
seconds = 1610575399;
};
}
2021-01-13 23:03:19.866 UTM[1989:117364] Debug JSON recieved <- {
data = {
client = {
family = ipv4;
host = "127.0.0.1";
port = 49546;
};
server = {
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_CONNECTED";
timestamp = {
microseconds = 866295;
seconds = 1610575399;
};
}
Debug JSON send -> {
execute = "query-block";
}
2021-01-13 23:03:19.867 UTM[1989:117355] Debug JSON send -> {
execute = "query-block";
}
qemu-system: info: GSpice: spice-channel.c:1391 inputs-3:0: Peer version: 2:2
qemu-system: info: GSpice: spice-channel.c:1947 inputs-3:0: spice_channel_recv_link_msg: 2 caps
qemu-system: info: GSpice: spice-channel.c:1961 inputs-3:0: got remote common caps:
qemu-system: info: GSpice: spice-channel.c:1897 0:0xB
qemu-system: info: GSpice: spice-channel.c:1968 inputs-3:0: got remote channel caps:
qemu-system: info: GSpice: spice-channel.c:1897 0:0x1
qemu-system: info: GSpice: spice-channel.c:2926 test cap 0 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2926 test cap 1 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2926 test cap 3 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2000 inputs-3:0: use mini header: 1
qemu-system: info: Spice: reds.c:2159:reds_handle_auth_mechanism: Auth method: 1
qemu-system: info: GSpice: spice-channel.c:1391 cursor-4:0: Peer version: 2:2
qemu-system: info: GSpice: spice-channel.c:1947 cursor-4:0: spice_channel_recv_link_msg: 1 caps
qemu-system: info: GSpice: spice-channel.c:1961 cursor-4:0: got remote common caps:
qemu-system: info: GSpice: spice-channel.c:1897 0:0xB
qemu-system: info: GSpice: spice-channel.c:1968 cursor-4:0: got remote channel caps:
qemu-system: info: GSpice: spice-channel.c:2926 test cap 0 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2926 test cap 1 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2926 test cap 3 in 0xB: yes
qemu-system: info: GSpice: spice-channel.c:2000 cursor-4:0: use mini header: 1
qemu-system: info: Spice: reds.c:1642:reds_info_new_channel: channel 3:0, connected successfully, over Non Secure link
Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 3;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49545;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 903944;
seconds = 1610575399;
};
}
qemu-system: info: Spice: red-channel-client.c:1394:red_channel_client_handle_pong: update roundtrip 1.71(ms)
qemu-system: info: Spice: reds.c:2159:reds_handle_auth_mechanism: Auth method: 1
qemu-system: info: GSpice: channel-base.c:81 main-1:0: spice_channel_handle_notify -- warn!!! #0: keyboard channel is insecure
2021-01-13 23:03:19.904 UTM[1989:117364] Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 3;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49545;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 903944;
seconds = 1610575399;
};
}
qemu-system: info: GSpice: spice-channel.c:1298 inputs-3:0: channel up, state 3
Debug JSON recieved <- {
return = (
{
device = drive0;
inserted = {
"backing_file_depth" = 0;
bps = 0;
"bps_rd" = 0;
"bps_wr" = 0;
cache = {
direct = 0;
"no-flush" = 0;
writeback = 0;
};
"detect_zeroes" = off;
drv = raw;
encrypted = 0;
"encryption_key_missing" = 0;
file = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
image = {
"actual-size" = 132120576;
"dirty-flag" = 0;
filename = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
format = raw;
"virtual-size" = 132120576;
};
iops = 0;
"iops_rd" = 0;
"iops_wr" = 0;
"node-name" = "#block135";
ro = 1;
"write_threshold" = 0;
};
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[22]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = "ide1-cd0";
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[23]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = floppy0;
locked = 0;
qdev = "/machine/unattached/device[16]";
removable = 1;
type = unknown;
},
{
device = sd0;
locked = 0;
removable = 1;
type = unknown;
}
);
}
2021-01-13 23:03:19.905 UTM[1989:117364] Debug JSON recieved <- {
return = (
{
device = drive0;
inserted = {
"backing_file_depth" = 0;
bps = 0;
"bps_rd" = 0;
"bps_wr" = 0;
cache = {
direct = 0;
"no-flush" = 0;
writeback = 0;
};
"detect_zeroes" = off;
drv = raw;
encrypted = 0;
"encryption_key_missing" = 0;
file = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
image = {
"actual-size" = 132120576;
"dirty-flag" = 0;
filename = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
format = raw;
"virtual-size" = 132120576;
};
iops = 0;
"iops_rd" = 0;
"iops_wr" = 0;
"node-name" = "#block135";
ro = 1;
"write_threshold" = 0;
};
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[22]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = "ide1-cd0";
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[23]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = floppy0;
locked = 0;
qdev = "/machine/unattached/device[16]";
removable = 1;
type = unknown;
},
{
device = sd0;
locked = 0;
removable = 1;
type = unknown;
}
);
}
qemu-system: info: Spice: reds.c:1642:reds_info_new_channel: channel 4:0, connected successfully, over Non Secure link
qemu-system: info: Spice: red-worker.c:815:handle_dev_cursor_connect: cursor connect
qemu-system: info: Spice: cursor-channel.c:349:cursor_channel_connect: add cursor channel client
qemu-system: info: GSpice: spice-channel.c:1298 cursor-4:0: channel up, state 3
Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 4;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49546;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 905635;
seconds = 1610575399;
};
}
2021-01-13 23:03:19.905 UTM[1989:117362] Debug JSON recieved <- {
data = {
client = {
"channel-id" = 0;
"channel-type" = 4;
"connection-id" = 16807;
family = ipv4;
host = "127.0.0.1";
port = 49546;
tls = 0;
};
server = {
auth = none;
family = ipv4;
host = "127.0.0.1";
port = 4001;
};
};
event = "SPICE_INITIALIZED";
timestamp = {
microseconds = 905635;
seconds = 1610575399;
};
}
qemu-system: info: GSpice: channel-cursor.c:387 cursor-4:0: set_cursor: flags 1, size 0
qemu-system: info: Spice: red-channel-client.c:1394:red_channel_client_handle_pong: update roundtrip 0.41(ms)
seen UIPasteboardChangedNotification
2021-01-13 23:03:20.867 UTM[1989:117355] seen UIPasteboardChangedNotification
seen UIPasteboardChangedNotification
2021-01-13 23:03:25.889 UTM[1989:117355] seen UIPasteboardChangedNotification
Debug JSON send -> {
execute = "query-block";
}
2021-01-13 23:03:26.303 UTM[1989:117355] Debug JSON send -> {
execute = "query-block";
}
Debug JSON recieved <- {
return = (
{
device = drive0;
inserted = {
"backing_file_depth" = 0;
bps = 0;
"bps_rd" = 0;
"bps_wr" = 0;
cache = {
direct = 0;
"no-flush" = 0;
writeback = 0;
};
"detect_zeroes" = off;
drv = raw;
encrypted = 0;
"encryption_key_missing" = 0;
file = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
image = {
"actual-size" = 132120576;
"dirty-flag" = 0;
filename = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
format = raw;
"virtual-size" = 132120576;
};
iops = 0;
"iops_rd" = 0;
"iops_wr" = 0;
"node-name" = "#block135";
ro = 1;
"write_threshold" = 0;
};
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[22]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = "ide1-cd0";
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[23]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = floppy0;
locked = 0;
qdev = "/machine/unattached/device[16]";
removable = 1;
type = unknown;
},
{
device = sd0;
locked = 0;
removable = 1;
type = unknown;
}
);
}
2021-01-13 23:03:26.303 UTM[1989:117361] Debug JSON recieved <- {
return = (
{
device = drive0;
inserted = {
"backing_file_depth" = 0;
bps = 0;
"bps_rd" = 0;
"bps_wr" = 0;
cache = {
direct = 0;
"no-flush" = 0;
writeback = 0;
};
"detect_zeroes" = off;
drv = raw;
encrypted = 0;
"encryption_key_missing" = 0;
file = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
image = {
"actual-size" = 132120576;
"dirty-flag" = 0;
filename = "/var/mobile/Containers/Data/Application/405ECBC3-E558-4764-9F51-4187DAFF9447/Documents/Virtual Machine.utm/Images/alpine-standard-3.12.3-x86_64.iso";
format = raw;
"virtual-size" = 132120576;
};
iops = 0;
"iops_rd" = 0;
"iops_wr" = 0;
"node-name" = "#block135";
ro = 1;
"write_threshold" = 0;
};
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[22]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = "ide1-cd0";
"io-status" = ok;
locked = 0;
qdev = "/machine/unattached/device[23]";
removable = 1;
"tray_open" = 0;
type = unknown;
},
{
device = floppy0;
locked = 0;
qdev = "/machine/unattached/device[16]";
removable = 1;
type = unknown;
},
{
device = sd0;
locked = 0;
removable = 1;
type = unknown;
}
);
}
Debug JSON send -> {
execute = quit;
}
2021-01-13 23:03:26.307 UTM[1989:117361] Debug JSON send -> {
execute = quit;
}
Debug JSON recieved <- {
return = {
};
}
Debug JSON recieved <- {
data = {
guest = 0;
reason = "host-qmp-quit";
};
event = SHUTDOWN;
timestamp = {
microseconds = 308275;
seconds = 1610575406;
};
}
2021-01-13 23:03:26.308 UTM[1989:117362] Debug JSON recieved <- {
return = {
};
}
2021-01-13 23:03:26.308 UTM[1989:117362] Debug JSON recieved <- {
data = {
guest = 0;
reason = "host-qmp-quit";
};
event = SHUTDOWN;
timestamp = {
microseconds = 308275;
seconds = 1610575406;
};
}
qemuWillQuit, reason = host-qmp-quit
2021-01-13 23:03:26.308 UTM[1989:117588] qemuWillQuit, reason = host-qmp-quit
qemu-system: info: GSpice: spice-session.c:1994 session: disconnecting 0
qemu-system: info: GSpice: spice-channel.c:2888 inputs-3:0: channel disconnect 0
2021-01-13 22:03:26,309 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSDisplayMetal.m:213 0:0 channel_destroy 0
2021-01-13 22:03:26,309 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSInput.m:190 0:0 channel_destroy 0
qemu-system: info: GSpice: spice-channel.c:2680 inputs-3:0: Coroutine exit inputs-3:0
qemu-system: info: GSpice: spice-channel.c:2871 inputs-3:0: reset
qemu-system: info: GSpice: spice-channel.c:2819 inputs-3:0: channel reset
2021-01-13 22:03:26,309 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSDisplayMetal.m:213 0:0 channel_destroy 0
2021-01-13 22:03:26,310 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSInput.m:190 0:0 channel_destroy 0
qemu-system: info: GSpice: spice-channel.c:2888 cursor-4:0: channel disconnect 0
qemu-system: info: GSpice: spice-channel.c:2680 cursor-4:0: Coroutine exit cursor-4:0
qemu-system: info: GSpice: spice-channel.c:2871 cursor-4:0: reset
qemu-system: info: GSpice: spice-channel.c:2819 cursor-4:0: channel reset
cs_channel_destroy:183
cs_channel_destroy:222
2021-01-13 22:03:26,310 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:184 zap display channel (#0)
2021-01-13 22:03:26,310 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSDisplayMetal.m:213 0:0 channel_destroy 0
2021-01-13 22:03:26,310 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSInput.m:190 0:0 channel_destroy 0
2021-01-13 23:03:26.310 UTM[1989:117495] cs_channel_destroy:183
2021-01-13 23:03:26.310 UTM[1989:117495] cs_channel_destroy:222
qemu-system: info: GSpice: spice-channel.c:2888 display-2:0: channel disconnect 0
qemu-system: info: GSpice: spice-channel.c:2680 display-2:0: Coroutine exit display-2:0
qemu-system: info: GSpice: spice-channel.c:2871 display-2:0: reset
qemu-system: info: GSpice: channel-display.c:1034 display-2:0: keeping existing primary surface, migration or reset
qemu-system: info: GSpice: spice-channel.c:2819 display-2:0: channel reset
2021-01-13 22:03:26,310 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSDisplayMetal.m:213 0:0 channel_destroy 0
2021-01-13 22:03:26,310 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSInput.m:190 0:0 channel_destroy 0
qemu-system: info: GSpice: spice-channel.c:2888 record-6:0: channel disconnect 0
qemu-system: info: Spice: dcc.c:1433:dcc_on_disconnect: trace
qemu-system: info: GSpice: spice-channel.c:2680 record-6:0: Coroutine exit record-6:0
qemu-system: info: GSpice: spice-channel.c:2871 record-6:0: reset
2021-01-13 22:03:26,311 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:191 zap audio channel
qemu-system: info: Spice: dcc.c:1445:dcc_on_disconnect: #draw=0, #glz_draw=0
2021-01-13 22:03:26,311 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSDisplayMetal.m:213 0:0 channel_destroy 0
2021-01-13 22:03:26,311 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSInput.m:190 0:0 channel_destroy 0
qemu-system: info: GSpice: spice-channel.c:2888 playback-5:0: channel disconnect 0
qemu-system: info: GSpice: spice-channel.c:2680 playback-5:0: Coroutine exit playback-5:0
qemu-system: info: GSpice: spice-channel.c:2871 playback-5:0: reset
cs_channel_destroy:176
qemu-system: info: GSpice: spice-session.c:2322 main-1:0: the session lost the main channel
2021-01-13 23:03:26.311 UTM[1989:117495] cs_channel_destroy:176
2021-01-13 22:03:26,311 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSConnection.m:177 zap main channel
2021-01-13 22:03:26,311 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSDisplayMetal.m:213 0:0 channel_destroy 0
2021-01-13 22:03:26,311 DEBUG (null)-/Users/runner/work/UTM/UTM/CocoaSpice/CSInput.m:190 0:0 channel_destroy 0
qemu-system: info: GSpice: spice-channel.c:2888 main-1:0: channel disconnect 0
qemu-system: info: GSpice: spice-channel.c:2680 main-1:0: Coroutine exit main-1:0
qemu-system: info: GSpice: spice-channel.c:2871 main-1:0: reset
qemu-system: info: GSpice: channel-main.c:1551 agent connected: no
qemu-system: info: GSpice: spice-session.c:1802 no migration in progress
seen UIPasteboardChangedNotification
2021-01-13 23:03:26.867 UTM[1989:117355] seen UIPasteboardChangedNotification
qemu-system: GSpice: spice_main_channel_agent_test_capability: assertion 'SPICE_IS_MAIN_CHANNEL(channel)' failed
Entering background
2021-01-13 23:03:34.172 UTM[1989:117355] Entering background
Upload VM
https://drive.google.com/file/d/1AGt6deWsY4tuLuMW7Xw6_scSEhhzIqYS/view?usp=drivesdk
Can you try pressing the zoom button on the toolbar twice?
That doesn't seem to have any effect.
FWIW, people have also reported other apps such as Delta's DS emulation being broken by the new beta.
What’s weird is that your logs seem to show that it booted successfully so I thought maybe it’s just a display issue.
It appears that JIT worked < iOS 14.4 due to a bug/oversight. In iOS 14.4 beta 2, the application needs to be attached to a debugger for code signing to be disabled (just like < iOS 14.2). Since the ptrace() trick was also fixed, it seems unlikely that JIT will work on non-jailbroken devices (untethered) anytime soon. So don't update past iOS 14.3 if you care about JIT apps.
I'm assuming this breaks all VMs?
We can now jailbreak 14.4, using checkra1n, so I presume that we can update now?
@brunocastello JIT was never supported without jailbreak on iOS 14.0 and higher on devices that checkra1n supports (A11 and older), but it continues to work with jailbreak, so if you're using one of the supported devices, feel free to update to 14.4.
This issue is regarding the JIT support that was present in iOS 14.2 and 14.3 for devices with A12 and newer chipsets, which are not and will never be supported by checkra1n, because it relies on a BootROM exploit that was patched in the newer chipsets.
Oh, I see. Should work on my iPad Pro "12 2nd gen (2017) then. It's A10X and its already jailbroken with checkra1n on 14.2. Maybe there is no need to update for now, since things are working well and 14.4 adds next to nothing new for my device.
I guess that I will have to postpone my plans on getting a newer iPad Pro model this year if I want to continue using UTM on it.
That reminds me, although it was already announced on Discord and Twitter, I should mention here that JIT is still broken in the final release of iOS 14.4. It seems like it was just a bug and Apple never intended to make it work without being connected to a debugger.
Why does this have a "wontfix" label?
Because it won't be fixed.
More precisely, it can't be fixed, because it depends on Apple. (Unless someone comes up with another crazy workaround, but the devs aren't going to spend time on that.)
FYI, iOS 14.5 Beta 1 still in the same situation as 14.4.
Sorry, but did anyone with a jailbroken A10/A10X device on iOS 14.4 (checkra1n) managed to run it? I suppose that one can jailbreak with checkra1n ticking the box to allow unsupported iOS versions...
@brunocastello Yeah do that and get it from the Cydia repo
FYI, iOS 14.5 Beta 1 is still in the same situation as 14.4.
Have you tried any of the newer betas?
@jkcoxson Yes, I do. It still doesn't work on iPadOS 14.5 beta 3 (18E5164h).
Curses
I am still on iOS 13.5 🌝
Been wondering if I can still run UTM if I update my iPad Pro (A10X) to 14.5 and jailbreak it with checkra1n. My iPad is still running 14.2.
@brunocastello You cannot jailbreak iOS 14.5 with checkra1n right now. But once it gets updated, JIT should work fine.
Apparently 14.4.2 is still being signed, if I can get it installed and jailbreak, should be fine too?
Yes, that should be fine. 14.5 is still in beta, so of course 14.4.2 is signed.
And I can’t get UTMapp
Sent from my iPad
I'll unfollow this discussion because it's no longer about the issue
Updated the title to reflect that the situation is unchanged. I have verified that the previous JIT workaround does not work on iOS 14.6 beta 3, still.
Closing due to the issue being well known and that for now, Jitterbug exists as a workaround.
|
gharchive/issue
| 2021-01-13T22:09:54 |
2025-04-01T06:40:49.815790
|
{
"authors": [
"MasterGamer000003",
"UInt2048",
"Zaq14rfv",
"aidenmitchell",
"brunocastello",
"conath",
"gabrc52",
"jkcoxson",
"kkk669",
"nyuszika7h",
"osy"
],
"repo": "utmapp/UTM",
"url": "https://github.com/utmapp/UTM/issues/2271",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1378783763
|
Plasma desktop with GPU acceleration on UTM.HV crashes
Describe the issue
I'm using virtio-gpu-gl-pci and virtio-ramfb-gl on the Linux VM, neither of them work.
UTM will crash after KDE loading logo.
virtio-ramfb is working, but it doesn't have GPU acceleration.
In the log, it seems like a OpenGL version issue, but it works fine on macOS.
Configuration (required)
UTM Version: UTM.HV v4.0.5
OS Version: iPadOS 15.0.2
Device Model: iPad Pro (11-inch) (3rd generation)
Is it jailbroken (name jailbreak used)? No
How did you install UTM? TrollStore
Crash log
UTM-2022-09-20-114303.log
Debug log
debug.log
I encountered the same issue with Ubuntu desktop
Appears to be a mesa issue. Can you try a lower version? https://gitlab.freedesktop.org/mesa/mesa/-/issues/7520
|
gharchive/issue
| 2022-09-20T04:01:40 |
2025-04-01T06:40:49.824103
|
{
"authors": [
"GreatHaop",
"TkzcM",
"osy"
],
"repo": "utmapp/UTM",
"url": "https://github.com/utmapp/UTM/issues/4433",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1553892253
|
Option for switching guest to fullscreen has disappeared
I used to be able to switch guests to fullscreen by pressing fn-f or by choosing an option from the pulldown menu. This seems to have disappeared recently. Is this a bug or a feature?
Configuration
UTM Version: 4.1.5 (74)
macOS Version: Ventura 13.1
Mac Chip (Intel, M1, ...): M1
I have just noticed that the green traffic light button actually does this (now).
|
gharchive/issue
| 2023-01-23T21:55:25 |
2025-04-01T06:40:49.826045
|
{
"authors": [
"mbert"
],
"repo": "utmapp/UTM",
"url": "https://github.com/utmapp/UTM/issues/4964",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1592331049
|
UTM gl acceleration red artifacts, blank apps
Describe the issue
In a Linux guest, all -gl (GPU-accelerated) displays show red artifacts during animations, and some applications, like Firefox, don't render at all and are transparent. X11 environments are much worse, with the desktop rendering as all black save for window edges. I've tried all the solutions for the mesa 22.1.7 bug with varying outcomes. Downgrading mesa to 22.1.7 mostly works, but puts me into dependency nightmares. Applying the patch from the main mesa post doesn't produce different behavior. Switching to non-GPU-accelerated displays behaves normally, but obviously performance is much worse. The same issues appear across multiple Debian-based and Arch-based distros. There don't appear to be any errors in dmesg or journalctl.
Thanks in advance for any help.
Guest Configuration
Arch
Desktop env: KDE Plasma 5.27.0 (wayland) (gnome acts similarly, x11 versions of both either crash or just show black screens)
Kernel: 5.19.8-1-aarch64-ARCH
mesa: mesa 22.3.4 (with virgl UTM patch)
UTM Configuration
UTM Version: 4.1.5
macOS Version: 13.2.1
Mac Chip: M2
Debug log
Running: -L /Applications/UTM.app/Contents/Resources/qemu -S -spice "unix=on,addr=/Users//Library/Group Containers/WDNLXAD4W8.com.utmapp.UTM/0372C6D6-00DF-4C2C-9A11-B9C5EB062593.spice,disable-ticketing=on,image-compression=off,playback-compression=off,streaming-video=off,gl=on" -chardev spiceport,id=org.qemu.monitor.qmp,name=org.qemu.monitor.qmp.0 -mon chardev=org.qemu.monitor.qmp,mode=control -nodefaults -vga none -device virtio-net-pci,mac=86:C6:A1:80:A5:8C,netdev=net0 -netdev vmnet-bridged,id=net0,ifname=en0 -device virtio-ramfb-gl -cpu host -smp cpus=8,sockets=1,cores=8,threads=1 -machine virt -accel hvf -drive if=pflash,format=raw,unit=0,file=/Applications/UTM.app/Contents/Resources/qemu/edk2-aarch64-code.fd,readonly=on -drive "if=pflash,unit=1,file=/Users//Library/Containers/com.utmapp.UTM/Data/Documents/archboot eos.utm/Data/efi_vars.fd" -m 8192 -device intel-hda -device hda-duplex -device nec-usb-xhci,id=usb-bus -device usb-tablet,bus=usb-bus.0 -device usb-mouse,bus=usb-bus.0 -device usb-kbd,bus=usb-bus.0 -device qemu-xhci,id=usb-controller-0 -chardev spicevmc,name=usbredir,id=usbredirchardev0 -device usb-redir,chardev=usbredirchardev0,id=usbredirdev0,bus=usb-controller-0.0 -chardev spicevmc,name=usbredir,id=usbredirchardev1 -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1,bus=usb-controller-0.0 -chardev spicevmc,name=usbredir,id=usbredirchardev2 -device usb-redir,chardev=usbredirchardev2,id=usbredirdev2,bus=usb-controller-0.0 -device virtio-blk-pci,drive=driveA19C4308-2037-428E-B469-E9B250807EBF,bootindex=0 -drive "if=none,media=disk,id=driveA19C4308-2037-428E-B469-E9B250807EBF,file=/Users//Library/Containers/com.utmapp.UTM/Data/Documents/archboot eos.utm/Data/A19C4308-2037-428E-B469-E9B250807EBF.qcow2,discard=unmap,detect-zeroes=unmap" -device virtio-serial -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -chardev spicevmc,id=vdagent,debug=0,name=vdagent -name "archboot eos" -uuid 0372C6D6-00DF-4C2C-9A11-B9C5EB062593 -device virtio-rng-pci
qemu-aarch64-softmmu: warning: Spice: playback:0 (0x1218da140): setsockopt failed, Operation not supported on socket
qemu-aarch64-softmmu: warning: Spice: record:0 (0x1218da1e0): setsockopt failed, Operation not supported on socket
gl_version 30 - es profile enabled
WARNING: running without ARB/KHR robustness in place may crash
vrend_renderer_fill_caps: Entering with stale GL error: 1280
GLSL feature level 130
mismatch in number of interps 1 1
GL error reported (1280) for context 4
Unfortunately I have no idea why the patch doesn't work for a lot of people. When I have time I'll try to see what the problem is. Other than that there isn't really a solution, sorry.
There were a couple things that came to mind that might be useful.
Parallels is able to use virgl without needing to install special drivers. If that could be a useful lead to follow I can attach and compare the relevant info.
This might be a more general bug with mesa + virgl. Consumer Nvidia GPUs usually don't work with virgl, but I was able to trick it in KVM/QEMU by messing with some egl devices. I haven't been able to reliably reproduce it, but if I do enough switching between windows on the guest and host the red artifacts occasionally appear on the guest. Who knows, maybe this will be an easy way to reproduce an elusive mesa/virgl bug.
Unfortunately my knowledge of systems and graphics is limited, so if there is more specific information I could provide or things to test please let me know.
Same error here. For the sake of replicating the bug, the Archlinux-tweak-tool gives a black screen and doesn't work. X11 doesn't even seem to work, but at least Wayland gives a signal.
Guest config: same as OP, but I didn't apply any external patch
Host config: MacOS 13.1 Mac Chip: Intel core i5
|
gharchive/issue
| 2023-02-20T19:05:05 |
2025-04-01T06:40:49.838654
|
{
"authors": [
"ktprograms",
"sbritorodr",
"thedustinmiller"
],
"repo": "utmapp/UTM",
"url": "https://github.com/utmapp/UTM/issues/5043",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
241958453
|
2.6.6 doesn't work in Sketch 45.1
It can't be used; actions get no response. I'm on the latest version, Measure 2.6.6. Installing 2.6.4 works fine.
Did you restart Sketch after installing it?
I tried twice yesterday, both times via Sketch's auto-update, and it still didn't work after a restart. Today I re-downloaded and reinstalled it, and after restarting it works.
|
gharchive/issue
| 2017-07-11T07:53:26 |
2025-04-01T06:40:49.840237
|
{
"authors": [
"hanuse",
"utom"
],
"repo": "utom/sketch-measure",
"url": "https://github.com/utom/sketch-measure/issues/332",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1868580374
|
conda install conflicting issue
Hello,
Thanks a lot for the awesome tool. I am not sure if it's just me, but I am stuck at the very first step: using conda to install the RF2 dependencies based on the yml file. The issue is always that conda finds conflicts for packages across different channels (see the attached log txt files). I found a similar issue on Stack Overflow (https://stackoverflow.com/questions/58216917/create-conda-environment-found-conflicts-when-solving-environment-and-findi), which indicates it is either a conda version issue or that it is better to rewrite the yml file. I tried two different conda versions on Linux, and they both ran into the same problem.
24031436_install.txt
24030912_install.txt
Has anyone else encountered the same problem here? Would it be possible to provide another yml file with a more explicit build version when exporting from the dev environment?
Thanks so much,
Frank
As an update, the examination run by conda suggests it might be a conflict with the GNU C library associated with the conda version installed on my cluster. I'm not sure if this is the main issue, but I'm posting it here in case it's helpful:
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.17=0
- feature:/linux-64::__unix==0=0
- feature:|@/linux-64::__glibc==2.17=0
- feature:|@/linux-64::__unix==0=0
- bioconda::hhsuite -> libgcc-ng[version='>=10.3.0'] -> __glibc[version='>=2.17']
- pyg::pyg -> pytorch=1.13 -> __glibc[version='>=2.17|>=2.17,<3.0.a0']
- pytorch -> __glibc[version='>=2.17|>=2.17,<3.0.a0']
- pytorch -> sympy -> __unix
Your installed version is: 2.17
|
gharchive/issue
| 2023-08-27T18:00:14 |
2025-04-01T06:40:49.852518
|
{
"authors": [
"frankligy"
],
"repo": "uw-ipd/RoseTTAFold2",
"url": "https://github.com/uw-ipd/RoseTTAFold2/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
427942442
|
Fix autocomplete issue
by encapsulating input in an internal field instead of using
straight SearchModel
Coverage increased (+0.02%) to 82.986% when pulling e42acc9a058356ed50b7f28f6a51700ef676aff5 on fix-autocomplete-search into 2a46f534b7db27da32aadef6384a975e629f0db2 on develop.
|
gharchive/pull-request
| 2019-04-01T22:38:50 |
2025-04-01T06:40:49.863687
|
{
"authors": [
"coveralls",
"maximede"
],
"repo": "uw-it-edm/content-services-ui",
"url": "https://github.com/uw-it-edm/content-services-ui/pull/503",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
160285562
|
Separate test dependencies
This change moves the python modules that are only required for testing Shared Services into their own section in setup.py so they are no longer installed by default.
Relevant files (eg .travis.yml) have also been updated.
Please let me know if I should move any other modules into this new section.
Very nice. Can you please also update the README instructions?
I would think these packages are also only needed for tests:
swagger_spec_validator
Flask-WebTest
|
gharchive/pull-request
| 2016-06-14T21:14:40 |
2025-04-01T06:40:49.889745
|
{
"authors": [
"ivan-c",
"pbugni"
],
"repo": "uwcirg/true_nth_usa_portal",
"url": "https://github.com/uwcirg/true_nth_usa_portal/pull/35",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
864396361
|
Improve Hackweek Team Page formatting
populating the "who we are" hackweek page with everyone's picture, links, etc. Do we get everyone to dump content into a file then someone on our side codes it up? Do we delegate to someone who's savvy with GitHub on the client's team?
https://uwhackweek.github.io/jupyterbook-template/team.html
Is currently populated from a markdown file here https://raw.githubusercontent.com/uwhackweek/jupyterbook-template/main/book/team.md
It would be nice to make it super easy to add people and have a better HTML layout, maybe using the "panels" feature https://jupyterbook.org/content/content-blocks.html#panels
This could be a nice way to automate the contributors / team list:
https://allcontributors.org/docs/en/overview
@all-contributors add @scottyhq for eventOrganizing
@all-contributors add @aarendt, @lsetiawan, @janekoh1 for eventOrganizing
@all-contributors add @aaarendt, @lsetiawan, @janekoh1 for eventOrganizing
Not sure if multiple commands per comment will work...
@all-contributors add @aaarendt for eventOrganizing, ideas, content
@all-contributors add @lsetiawan for eventOrganizing, ideas, content
@all-contributors add @janekoh1 for eventOrganizing, ideas
@all-contributors add @aaarendt for eventOrganizing, ideas, content
@all-contributors add @aaarendt for eventOrganizing, ideas, content
Try adding without using github ping:
@all-contributors add lsetiawan for eventOrganizing, ideas, content
If PR closed without merging, try again
@all-contributors add lsetiawan for code, ideas, content
closed by #16
|
gharchive/issue
| 2021-04-21T23:58:59 |
2025-04-01T06:40:49.896157
|
{
"authors": [
"scottyhq"
],
"repo": "uwhackweek/jupyterbook-template",
"url": "https://github.com/uwhackweek/jupyterbook-template/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
950170172
|
Skip 'update-ca-certificates' run if the certs are updated automatically
In SLES15SP3, the directory /etc/pki/trust/anchors is watched via the ca-certificates.path systemd unit, and as soon as its contents change, the system-wide certificates are updated automatically. In the certs state, we run update-ca-certificates right after deploying certs. These two processes create a race condition that in rare cases leads to an error.
This PR updates the certs state to run update-ca-certificates only if this systemd unit does not exist or is not running.
See https://bugzilla.suse.com/1188500
@cbbayburt beware this PR was missing two other calls to update-ca-certificates.
The testsuite error is caused by the minion/testsuite.sls only, but I'd expect the other state could cause the same problem when it's used (normally the testsuite doesn't use this state) so I'll update it as well. But I don't think we need to update bashrc at this moment. Even if we do, we need a different solution for that one.
ok.
Rather than testing the presence of a service, I would rather simply test the sle15 sp2 grain and lower. Admitting this is needed.
To give a bit more context: This issue also affects SUSE Manager production environment. We run a similar state during minion onboarding. Starting from that point, we discussed this in the dev team, and decided this would be the most natural solution.
The other reasonable option is of course filtering by distribution. But this wouldn't be as accurate as the other, simply because we don't know which other distributions/versions will support the auto-update or not. They might backport it, they might drop it in a future version, etc. but checking for the service is guaranteed to find out if the auto-update will happen.
I'm currently testing this with both SLES12 and SLES15 machines. When done, I'll open a new PR with the fix.
|
gharchive/pull-request
| 2021-07-21T23:01:39 |
2025-04-01T06:40:49.921566
|
{
"authors": [
"Bischoff",
"cbbayburt"
],
"repo": "uyuni-project/sumaform",
"url": "https://github.com/uyuni-project/sumaform/pull/926",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1099934427
|
[ WARN] [1641968634.644978988]: Camera calibration file /home/zfl/.ros/camera_info/DAVIS-00000554.yaml not found.
Dear master, I encountered the same problem as the one described at https://giters.com/uzh-rpg/rpg_dvs_ros/issues/117.
But I find that /opt/inivation/boost/ is used instead of /opt/boost-inivation/. The error is still displayed as follows:
Camera calibration file /home/zfl/.ros/camera_info/DAVIS-00000554.yaml not found.
And when I try uninstalling boost-inivation, I can't connect to the DAVIS 346 at all.
Can you help me? All of this is on Ubuntu 18.04.
Hi, I am also getting the same error. I have tried the solution given in the web page but it did not work.
Is there anything that I can try? I am working on Ubuntu 20.04 by the way.
Actually I met the same problem. It happens because you do not have a well-formed YAML file; all you need to do is put your calibration result into a suitable form like this.
///
image_width: 346
image_height: 260
camera_name: DAVIS
camera_matrix:
  rows: 3
  cols: 3
  data: [274.6631, 0, 168.47, 0, 274.59, 141.8423, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.4043, 0.2296, 0.0003377, 0.000391, -0.080294]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
  rows: 3
  cols: 4
  data: [1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
///
Do not forget to move it to the right path like "/home/xxx/.ros/camera_info" by the way.
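If it helps, here is a minimal Python sketch (assuming PyYAML is installed) that writes that calibration to the path the driver looks for. The numbers are the ones quoted above, so substitute your own calibration results and camera serial number:

```python
from pathlib import Path
import yaml  # PyYAML

# Calibration values quoted in this thread; replace them with your own results.
calib = {
    "image_width": 346,
    "image_height": 260,
    "camera_name": "DAVIS",
    "camera_matrix": {"rows": 3, "cols": 3,
                      "data": [274.6631, 0, 168.47, 0, 274.59, 141.8423, 0, 0, 1]},
    "distortion_model": "plumb_bob",
    "distortion_coefficients": {"rows": 1, "cols": 5,
                                "data": [-0.4043, 0.2296, 0.0003377, 0.000391, -0.080294]},
    "rectification_matrix": {"rows": 3, "cols": 3,
                             "data": [1, 0, 0, 0, 1, 0, 0, 0, 1]},
    "projection_matrix": {"rows": 3, "cols": 4,
                          "data": [1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0]},
}

# The driver looks for <camera serial>.yaml under ~/.ros/camera_info.
out = Path.home() / ".ros" / "camera_info" / "DAVIS-00000554.yaml"
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(yaml.safe_dump(calib, sort_keys=False))
print("wrote", out)
```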
|
gharchive/issue
| 2022-01-12T06:33:37 |
2025-04-01T06:40:49.943424
|
{
"authors": [
"987428377",
"OnurSelim",
"ZhdsGithub"
],
"repo": "uzh-rpg/rpg_dvs_ros",
"url": "https://github.com/uzh-rpg/rpg_dvs_ros/issues/130",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
333603343
|
RTU framer: two extra Bytes for FramerState.ReadingHeader
(https://github.com/gregorschatz/pymodbus3/commit/417820a8045be7cc47a57cf7300fd41311dc2e16 found back in 2015)
Maybe it was just a worse solution for #4 "Defect in Modbus RTU handling", but it handles header processing in the Framer's ReadingHeader state without confusing it with the ReadingContent state.
And again on Modbus RTU framing:
the CRC was checked, but the result was not used to detect defective frames
broken RTU frames could break RTU framing indefinitely (through unhandled exceptions)
test_scripts.zip
|
gharchive/pull-request
| 2018-06-19T10:04:32 |
2025-04-01T06:40:49.945884
|
{
"authors": [
"gregorschatz"
],
"repo": "uzumaxy/pymodbus3",
"url": "https://github.com/uzumaxy/pymodbus3/pull/12",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1966938730
|
HTTPUpgrade transport still requires support from the CDN provider to pass WebSocket
v2ray v5.10.1 of has introduced a new HTTPUpgrade transport protocol, described as:
It is a reduced version of WebSocket Transport that can pass many reverse proxies and CDNs without running a WebSocket protocol stack.
After testing, when the CDN provider's WebSocket feature is turned off, it's not possible to pass data normally because the first action of HTTPUpgrade is to request an upgrade to WebSocket. It seems that HTTPUpgrade transport cannot be used with CDN providers that do not support WebSocket, so compared to the previous WebSocket transport, HTTPUpgrade transport does not seem to have a clear advantage.
HTTPUpgrade does not require either the client or the server to support a WebSocket protocol stack, which allows them to decrease resource usage, simplify the implementation, improve speed, and make it possible to control packet length precisely.
It still uses the same underlying mechanism as WebSocket to create a bidirectional channel, so it works only on the platforms where WebSocket works, and this is expected. Meek can pass CDNs that do not support WebSocket, although it is kind of slow, so it is not designed as a daily-driver protocol.
Very interesting. I used HTTPUpgrade to send traffic through Azure CDN and it worked, but I had no success using WebSocket before. According to the Azure documentation, Azure CDN does not support WebSocket. Maybe I need to continue investigating it.
@xiaokangwang Can HTTP Upgrade be used in a 0-RTT manner, similar to WebSocket, to reduce the latency of the first packet?
@xiaokangwang Can HTTP Upgrade be used in a 0-RTT manner, similar to WebSocket, to reduce the latency of the first packet?
This is possible but currently unimplemented.
|
gharchive/issue
| 2023-10-29T10:51:22 |
2025-04-01T06:40:49.958543
|
{
"authors": [
"xiaokangwang",
"ytai-chn",
"zhu327",
"zirconpl"
],
"repo": "v2fly/v2ray-core",
"url": "https://github.com/v2fly/v2ray-core/issues/2734",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
530974227
|
A bug in sniffing
When using BifrostV to proxy Xunlei (Thunder), with inbound sniffing enabled Xunlei cannot download; after turning sniffing off, Xunlei downloads normally.
The same problem also appears with v2rayNG.
Looking at the logs, the sniffer detected the domain res.res.res.res, but after searching online I found there is no domain with that suffix.
It looks like the sniffing is producing wrong results.
Don't use the leeching Xunlei; qBittorrent is much better: https://www.johnrosen1.com/qbt/
Don't use the leeching Xunlei; qBittorrent is much better: https://www.johnrosen1.com/qbt/
In an intranet-only environment Xunlei is the only thing I could find; the one you mentioned needs a public network.
Don't use the leeching Xunlei; qBittorrent is much better: https://www.johnrosen1.com/qbt/
In an intranet-only environment Xunlei is the only thing I could find; the one you mentioned needs a public network.
No, on an intranet it is just slower than on a public network. Worst case, ask for a public IP; that's not hard to get.
Closing.
|
gharchive/issue
| 2019-12-02T08:39:34 |
2025-04-01T06:40:49.962073
|
{
"authors": [
"FH0",
"johnrosen1"
],
"repo": "v2ray/v2ray-core",
"url": "https://github.com/v2ray/v2ray-core/issues/2075",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
578423573
|
v2ray cannot access the network
Before submitting an issue, please read the issue guidelines first and then answer the questions below. Thank you.
Unless there are special circumstances, please answer all the questions completely. Issues that do not follow the template will be closed directly.
If the problem you encountered is not a V2Ray bug, for example if you are not sure how to configure it, please use Discussions.
Which version of V2Ray are you using? (If the server and client use different versions, please note both.)
2.1.0
What is your usage scenario? For example, using Chrome through a SOCKS/VMess proxy to watch YouTube videos.
Accessing Google through v2rayU's VMess proxy.
What abnormal behavior do you see? (Please describe the specific symptoms, e.g. access timeouts, TLS certificate errors, etc.)
Access to Google fails.
What is the correct behavior you expect to see?
The same configuration on my phone can access Google.
Please attach your configuration (hide the server-side IP address before submitting the issue).
Client configuration:
{
"log": {
"error": "",
"loglevel": "info",
"access": ""
},
"inbounds": [
{
"listen": "127.0.0.1",
"protocol": "socks",
"settings": {
"udp": false,
"auth": "noauth"
},
"port": "1081"
},
{
"listen": "127.0.0.1",
"protocol": "http",
"settings": {
"timeout": 360
},
"port": "1088"
}
],
"outbounds": [
{
"mux": {
"enabled": false,
"concurrency": 8
},
"protocol": "vmess",
"streamSettings": {
"wsSettings": {
"path": "/serv",
"headers": {
"host": "us.nettt.xyz"
}
},
"tlsSettings": {
"allowInsecure": true
},
"security": "tls",
"network": "ws"
},
"tag": "proxy",
"settings": {
"vnext": [
{
"address": "us.nettt.xyz",
"users": [
{
"id": "DF074072-2D7A-653A-4FF2-750DB83789A8",
"alterId": 1,
"level": 0,
"security": "aes-128-gcm"
}
],
"port": 443
}
]
}
},
{
"tag": "direct",
"protocol": "freedom",
"settings": {
"domainStrategy": "AsIs",
"redirect": "",
"userLevel": 0
}
},
{
"tag": "block",
"protocol": "blackhole",
"settings": {
"response": {
"type": "none"
}
}
}
],
"dns": {},
"routing": {
"settings": {
"domainStrategy": "AsIs",
"rules": []
}
},
"transport": {}
}
Please attach the error log output by the software when the failure occurs. On Linux, the log is usually in the /var/log/v2ray/error.log file.
rocess outbound traffic > v2ray.com/core/proxy/vmess/outbound: failed to find an available destination > v2ray.com/core/common/retry: [v2ray.com/core/transport/internet/websocket: failed to dial WebSocket > v2ray.com/core/transport/internet/websocket: failed to dial to (wss://us.nettt.xyz/serv): > dial tcp: lookup us.nettt.xyz on 192.168.3.1:53: read udp 192.168.0.113:60725->192.168.3.1:53: i/o timeout v2ray.com/core/transport/internet/websocket: failed to dial WebSocket > v2ray.com/core/transport/internet/websocket: failed to dial to (wss://us.nettt.xyz/serv): > dial tcp: i/o timeout v2ray.com/core/transport/internet/websocket: failed to dial WebSocket > v2ray.com/core/transport/internet/websocket: failed to dial to (wss://us.nettt.xyz/serv): > dial tcp: lookup us.nettt.xyz on 192.168.3.1:53: read udp 192.168.0.113:51958->192.168.3.1:53: i/o timeout v2ray.com/core/transport/internet/websocket: failed to dial WebSocket > v2ray.com/core/transport/internet/websocket: failed to dial to (wss://us.nettt.xyz/serv): > dial tcp: operation was canceled] > v2ray.com/core/common/retry: all retry attempts failed
2020/03/10 16:39:13 [Info] [4226428636] v2ray.com/core/app/proxyman/inbound: connection ends > v2ray.com/core/proxy/socks: connection ends > context canceled
2020/03/10 16:39:13 [Info] [4226428636] v2ray.com/core/transport/internet/websocket: creating connection to tcp:us.nettt.xyz:443
2020/03/10 16:39:14 [Info] [4226428636] v2ray.com/core/transport/internet/websocket: creating connection to tcp:us.nettt.xyz:443
2020/03/10 16:39:14 [Info] [4184944925] v2ray.com/core/app/proxyman/inbound: connection ends > v2ray.com/core/proxy/socks: connection ends > context canceled
2020/03/10 16:39:14 [Info] [4184944925] v2ray.com/core/transport/internet/websocket: creating connection to tcp:us.nettt.xyz:443
2020/03/10 16:39:14 [Info] [4226428636] v2ray.com/core/transport/internet/websocket: creating connection to tcp:us.nettt.xyz:443
2020/03/10 16:39:15 [Info] [4184944925] v2ray.com/core/transport/internet/websocket: creating connection to tcp:us.nettt.xyz:443
2020/03/10 16:39:15 [Info] [4184944925] v2ray.com/core/transport/internet/websocket: creating connection to tcp:us.nettt.xyz:443
2020/03/10 16:39:15 [Warning] [4226428636] v2ray.com/core/app/proxyman/outbound: failed to process outbound traffic > v2ray.com/core/proxy/vmess/outbound: failed to find an available destination > v2ray.com/core/common/retry: [v2ray.com/core/transport/internet/websocket: failed to dial WebSocket > v2ray.com/core/transport/internet/websocket: failed to dial to (wss://us.nettt.xyz/serv): > dial tcp: lookup us.nettt.xyz on 192.168.3.1:53: read udp 192.168.0.113:51958->192.168.3.1:53: i/o timeout v2ray.com/core/transport/internet/websocket: failed to dial WebSocket > v2ray.com/core/transport/internet/websocket: failed to dial to (wss://us.nettt.xyz/serv): > dial tcp: operation was canceled] > v2ray.com/core/common/retry: all retry attempts failed
2020/03/10 16:39:16 [Warning] [4184944925] v2ray.com/core/app/proxyman/outbound: failed to process outbound traffic > v2ray.com/core/proxy/vmess/outbound: failed to find an available destination > v2ray.com/core/common/retry: [v2ray.com/core/transport/internet/websocket: failed to dial WebSocket > v2ray.com/core/transport/internet/websocket: failed to dial to (wss://us.nettt.xyz/serv): > dial tcp: lookup us.nettt.xyz on 192.168.3.1:53: read udp 192.168.0.113:51958->192.168.3.1:53: i/o timeout v2ray.com/core/transport/internet/websocket: failed to dial WebSocket > v2ray.com/core/transport/internet/websocket: failed to dial to (wss://us.nettt.xyz/serv): > dial tcp: i/o timeout v2ray.com/core/transport/internet/websocket: failed to dial WebSocket > v2ray.com/core/transport/internet/websocket: failed to dial to (wss://us.nettt.xyz/serv): > dial tcp: operation was canceled] > v2ray.com/core/common/retry: all retry attempts failed
How was this IP 192.168.3.1:53 obtained?
How was this IP 192.168.3.1:53 obtained?
That's your gateway acting as the local DNS, right? Check cat /etc/resolv.conf
cat /etc/resolv.conf
Thanks, that was exactly the problem; it is solved now.
|
gharchive/issue
| 2020-03-10T08:50:09 |
2025-04-01T06:40:50.002330
|
{
"authors": [
"cyssxt",
"mzz2017"
],
"repo": "v2ray/v2ray-core",
"url": "https://github.com/v2ray/v2ray-core/issues/2321",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
677204735
|
Updated to be compatible with bashio
Updated to use bashio library. (Fixing: Failed to create server: Invalid host name)
Fixed '/libasound_module_pcm_pulse.so: No such file or directory' issue.
Updated to use latest hassioaddons/base:8.0.1.
Changes were tested on an rpi3.
Hi,
I use rpi3+, Hassio Home Assistant 0.113.3
I have updated the latest addon
avahi-daemon 0.7 exiting.
./run: line 7: /usr/lib/hassio-addons/base.sh: No such file or directory
./run: line 16: hass.debug: command not found
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.7 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Successfully called chroot().
Successfully dropped remaining capabilities.
No service file found in /etc/avahi/services.
Failed to create server: Invalid host name
avahi-daemon 0.7 exiting.
./run: line 7: /usr/lib/hassio-addons/base.sh: No such file or directory
./run: line 16: hass.debug: command not found
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.7 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Successfully called chroot().
Successfully dropped remaining capabilities.
No service file found in /etc/avahi/services.
Failed to create server: Invalid host name
avahi-daemon 0.7 exiting.
./run: line 7: /usr/lib/hassio-addons/base.sh: No such file or directory
./run: line 16: hass.debug: command not found
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.7 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Successfully called chroot().
Successfully dropped remaining capabilities.
No service file found in /etc/avahi/services.
Failed to create server: Invalid host name
avahi-daemon 0.7 exiting.
./run: line 7: /usr/lib/hassio-addons/base.sh: No such file or directory
./run: line 16: hass.debug: command not found
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.7 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Successfully called chroot().
Successfully dropped remaining capabilities.
No service file found in /etc/avahi/services.
Failed to create server: Invalid host name
avahi-daemon 0.7 exiting.
./run: line 7: /usr/lib/hassio-addons/base.sh: No such file or directory
./run: line 16: hass.debug: command not found
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.7 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Successfully called chroot().
Successfully dropped remaining capabilities.
No service file found in /etc/avahi/services.
Failed to create server: Invalid host name
avahi-daemon 0.7 exiting.
./run: line 7: /usr/lib/hassio-addons/base.sh: No such file or directory
./run: line 16: hass.debug: command not found
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.7 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Successfully called chroot().
Successfully dropped remaining capabilities.
No service file found in /etc/avahi/services.
Failed to create server: Invalid host name
avahi-daemon 0.7 exiting.
./run: line 7: /usr/lib/hassio-addons/base.sh: No such file or directory
./run: line 16: hass.debug: command not found
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.7 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Successfully called chroot().
Successfully dropped remaining capabilities.
No service file found in /etc/avahi/services.
Failed to create server: Invalid host name
avahi-daemon 0.7 exiting.
./run: line 7: /usr/lib/hassio-addons/base.sh: No such file or directory
./run: line 16: hass.debug: command not found
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.7 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Successfully called chroot().
Successfully dropped remaining capabilities.
No service file found in /etc/avahi/services.
Failed to create server: Invalid host name
avahi-daemon 0.7 exiting.
./run: line 7: /usr/lib/hassio-addons/base.sh: No such file or directory
./run: line 16: hass.debug: command not found
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.7 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Successfully called chroot().
Successfully dropped remaining capabilities.
No service file found in /etc/avahi/services.
Failed to create server: Invalid host name
avahi-daemon 0.7 exiting.
Thanks for fixing it @adymanis I don't use this addon myself anymore so I was struggling to find time for that.
|
gharchive/pull-request
| 2020-08-11T21:19:32 |
2025-04-01T06:40:50.020018
|
{
"authors": [
"adymanis",
"v3rm0n",
"vudev"
],
"repo": "v3rm0n/addon-shairport-sync",
"url": "https://github.com/v3rm0n/addon-shairport-sync/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
451795316
|
Forgot password button is not visible in bakery app
Hi,
I am trying to learn the Java framework. In the source code of the Vaadin bakery app, there is a call to setForgotPasswordButtonVisible(true); in the constructor of LoginView.java, but no forgot password button is shown in the app.
Hello @sudipta1411,
If you take a look at the code before the call you mentioned, there's a line i18n.setForm(new LoginI18n.Form());, which basically creates an empty object. That's why no text is being set on the i18n.form.forgotPassword property, causing the button to show empty on the page (you can even inspect that the button is present there).
You can either remove the line i18n.setForm(new LoginI18n.Form()); or make a call to i18n.getForm().setForgotPassword("button text");.
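A minimal sketch of the second option, assuming the view extends LoginOverlay (the base class and label text here are illustrative, not the exact bakery code):
import com.vaadin.flow.component.login.LoginI18n;
import com.vaadin.flow.component.login.LoginOverlay;
public class LoginView extends LoginOverlay {
    public LoginView() {
        LoginI18n i18n = LoginI18n.createDefault();
        i18n.setForm(new LoginI18n.Form());                   // empty form, so the button has no text
        i18n.getForm().setForgotPassword("Forgot password?"); // give the button a label
        setI18n(i18n);
        setForgotPasswordButtonVisible(true);
    }
}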
Ok. Thanks for replying. I have inspected that. I thought it was the default behaviour
|
gharchive/issue
| 2019-06-04T04:42:50 |
2025-04-01T06:40:50.093615
|
{
"authors": [
"DiegoCardoso",
"sudipta1411"
],
"repo": "vaadin/vaadin-login",
"url": "https://github.com/vaadin/vaadin-login/issues/93",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1114205904
|
Update and verify directions for getting running on M1.
@danpopnyc
Ok, went through and verified with a clean install.
Signed-off-by: Ville Aikas vaikas@chainguard.dev
seeing an issue with the mysql image
machine is running no issues after running new steps! :)
Woot Woot, and just to double-check, the error with mysql not coming up was running with release.yaml vs. release-arm.yaml?
correct. runs perfectly after changing to release-arm.yaml
Visual confirmation
|
gharchive/pull-request
| 2022-01-25T18:06:31 |
2025-04-01T06:40:50.111259
|
{
"authors": [
"danpopnyc",
"vaikas"
],
"repo": "vaikas/sigstore-scaffolding",
"url": "https://github.com/vaikas/sigstore-scaffolding/pull/32",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
577411783
|
Simple loading of trees from json and refreshing.
Hi,
I've set up a minimal example where I get some json
data and initialize a jstree with it.
The data is coming from a file which is selected using
an input field, so I would like to update the tree whenever
I specify the name of a new file. My code looks as follows.
var filename = document.getElementById("filename").value;
$.getJSON(filename, function(json) {
    console.log(json);
    $("#tree")
        .jstree({
            "core" : {
                "themes" : {
                    "stripes" : false,
                    "dots" : true
                },
                "check_callback" : true,
                "data" : json
            },
            "plugins" : [ ]
        });
    var tree = $('#tree').jstree(true);
    tree.refresh();
});
The problem is that it works only the first time, then it seems to not refresh.
The console.log shows that the json data is there correctly.
I just do not know how to refresh the tree with the new data.
I have searched the web and found that many people experienced the same issue
but I was not able to find a working solution.
Please help, thanks.
-Giulio
This is not jstree related, but anyway:
You have initialized the tree with some static JSON, so when you refresh it - it will still be the same. If you want to use a new JSON - either recreate the whole tree with the new JSON (instead of invoking refresh), or set core.data to a function, which in turn checks the input value, loads the file and returns the new JSON.
Thanks for the reply, but how do I recreate the tree with the new json?
Can you provide an example?
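A minimal sketch of the suggestion above, assuming the same #tree container and filename input as in the original snippet (the settings path shown is the jstree 3.x one):
function loadTree() {
    var filename = document.getElementById("filename").value;
    $.getJSON(filename, function (json) {
        var existing = $("#tree").jstree(true);
        if (existing) {
            // point the existing instance at the new data, then refresh it
            existing.settings.core.data = json;
            existing.refresh();
        } else {
            // first call: create the tree
            $("#tree").jstree({ "core": { "check_callback": true, "data": json } });
        }
    });
}
Alternatively, calling $("#tree").jstree("destroy"); and then re-invoking .jstree({...}) recreates the whole tree from scratch with the new JSON.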
|
gharchive/issue
| 2020-03-07T22:42:21 |
2025-04-01T06:40:50.120340
|
{
"authors": [
"jermp",
"vakata"
],
"repo": "vakata/jstree",
"url": "https://github.com/vakata/jstree/issues/2368",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1821710885
|
Visibility Layers
Objective
The objective of this PR is to solve the following problems with a holistic redesign of instances.
Closes #342, #362, but deviates greatly from the solution described in the issue.
If an entity is within the view distance of a client, packets for that entity are sent to the client unconditionally. However, there is often a need to limit the visibility of entities to certain clients depending on the situation. Some examples:
You have a minigame with spectators and players. Spectators can see both players and spectators, but players can only see players and not spectators.
An entity or set of entities is used as a makeshift GUI, only visible to the client using the GUI.
The server has many separate "arenas" that have identical chunk data, and you wish to share the chunk data across arenas to save memory.
You want to make a player truly invisible and potentially have some other entity take its place.
It is possible to work around the problem by using invisibility effects or sending packets manually. But these are hacky solutions that are incomplete, inefficient, and inconvenient.
Updating clients involves looking up every chunk in the client's view distance every tick for every client. This wastes CPU time and doesn't scale well with larger view distances[^1].
Sometimes we want to broadcast packets to all clients matching some condition. Conditions could include...
If a chunk position is within the client's view.
If the client is within a certain radius of some position.
If the client is not some specific client (self exclusion).
It isn't really possible to add all of these conditions in an efficient way using the current design.
Solution
Split the existing Instance component into two new components: ChunkLayer and EntityLayer. Chunk layers contain all of the chunks in a world along with some global configuration, like the world's dimension. Entity layers contain Minecraft entities. A LayerBundle containing both is provided for convenience. Both ChunkLayer and EntityLayer implement the common Layer trait.
The key idea is this: Clients can only view one chunk layer[^2], but can view any number of entity layers. These are the VisibleChunkLayer and VisibleEntityLayers components respectively. The client will receive entity packets from only the layers it can see. Clients can add and remove entity layers at any time to spawn and despawn the entities in the layer.
Every layer contains a "message buffer" for broadcasting information to all viewers. Every message contains a "key" and a payload of bytes (its meaning depends on the message). Clients walk through the list of messages (parallelized over all clients) and use this to update themselves.
There are a few things done to make this faster:
The messages are sorted by their "key" and then deduplicated by merging payloads together. For instance, messages sending packet data on the condition that a certain chunk position is in view will be merged together if the chunk position is the same. Now there's only one memcpy instead of two.
Messages are split into two categories: "global" and "local". Local messages are those that have some spatial condition to them. Local messages are put in a bounding volume hierarchy so that large swaths of messages do not need to be examined by clients.
TODOs
[ ] More unit tests and fix bugs.
[x] Add missing docs.
[ ] Reimplement valence_anvil.
[ ] Reimplement weather module.
[ ] Reimplement valence_world_border.
[^1]: At least the work is completely parallelized ¯\_(ツ)_/¯.
[^2]: Viewing multiple chunk layers has some technical problems and I don't see it as a design worth pursuing.
I'm working on a capture the flag example to try out the API. I'll post it as a PR when it's done.
I've made a simple example for those who want one:
Playground
code
use std::f64::consts::TAU;
use glam::{DQuat, EulerRot};
use valence::client::despawn_disconnected_clients;
use valence::log::LogPlugin;
use valence::network::ConnectionMode;
use valence::prelude::*;
#[allow(unused_imports)]
use crate::extras::*;
type SpherePartBundle = valence::entity::cow::CowEntityBundle;
const SPHERE_CENTER: DVec3 = DVec3::new(0.5, SPAWN_POS.y as f64 + 2.0, 0.5);
const SPHERE_AMOUNT: usize = 200;
const SPHERE_MIN_RADIUS: f64 = 6.0;
const SPHERE_MAX_RADIUS: f64 = 12.0;
const SPHERE_FREQ: f64 = 0.5;
const SPAWN_POS: BlockPos = BlockPos::new(0, 100, -16);
/// Marker component for entities that are part of the sphere.
#[derive(Component)]
struct SpherePart;
#[derive(Component)]
struct Main;
#[derive(Component)]
struct LayerSwitcherState(bool);
#[derive(Component)]
struct LayerSwitcherPos(BlockPos);
#[derive(Bundle)]
struct LayerSwitcherBundle {
state: LayerSwitcherState,
pos: LayerSwitcherPos,
layer: EntityLayerId,
}
impl LayerSwitcherBundle {
fn new(layer: EntityLayerId, pos: BlockPos) -> Self {
Self {
state: LayerSwitcherState(false),
pos: LayerSwitcherPos(pos),
layer,
}
}
}
const LAYER_SWITCHERS_POS: [BlockPos; 4] = [
BlockPos::new(-1, SPAWN_Y, -1),
BlockPos::new(1, SPAWN_Y, -1),
BlockPos::new(-1, SPAWN_Y, 1),
BlockPos::new(1, SPAWN_Y, 1),
];
const SPAWN_Y: i32 = 64;
pub fn build_app(app: &mut App) {
app.insert_resource(NetworkSettings {
connection_mode: ConnectionMode::Offline,
..Default::default()
})
.add_plugins(DefaultPlugins.build().disable::<LogPlugin>())
.add_systems(Startup, setup)
.add_systems(EventLoopUpdate, (update_sphere, update_visible_layers))
.add_systems(Update, (init_clients, despawn_disconnected_clients))
.run();
}
fn setup(mut commands: Commands, server: Res<Server>) {
commands.spawn((EntityLayer::new(&server), Main));
// Spawn 4 EntityLayers, one for each quadrant and get there IDs.
let layer_ids = (0..4)
.map(|_| commands.spawn(EntityLayer::new(&server)).id())
.collect::<Vec<_>>();
// Setup layers switchers.
// create a bundle layer switcher with there positions.
let layer_switchers = [
LayerSwitcherBundle::new(EntityLayerId(layer_ids[0]), LAYER_SWITCHERS_POS[0]),
LayerSwitcherBundle::new(EntityLayerId(layer_ids[1]), LAYER_SWITCHERS_POS[1]),
LayerSwitcherBundle::new(EntityLayerId(layer_ids[2]), LAYER_SWITCHERS_POS[2]),
LayerSwitcherBundle::new(EntityLayerId(layer_ids[3]), LAYER_SWITCHERS_POS[3]),
];
// Spawn the layer switchers.
commands.spawn_batch(layer_switchers);
commands.spawn_batch([0; SPHERE_AMOUNT].iter().enumerate().map(move |(i, _)| {
(
SpherePartBundle {
layer: EntityLayerId(layer_ids[i % 4]),
..Default::default()
},
SpherePart,
)
}));
}
fn init_clients(
mut commands: Commands,
server: Res<Server>,
biomes: Res<BiomeRegistry>,
dimensions: Res<DimensionTypeRegistry>,
mut clients: Query<
(
&mut EntityLayerId,
&mut VisibleChunkLayer,
&mut VisibleEntityLayers,
&mut Position,
&mut GameMode,
),
Added<Client>,
>,
main_entity_layer: Query<Entity, (With<EntityLayer>, With<Main>)>,
) {
for (
mut layer_id,
mut visible_chunk_layer,
mut visible_entity_layers,
mut pos,
mut game_mode,
) in &mut clients
{
let entity_layer = main_entity_layer.single();
let mut chunk_layer = ChunkLayer::new(ident!("overworld"), &dimensions, &biomes, &server);
init_chunk_layer(&mut chunk_layer);
let chunk_layer_id = commands.spawn(chunk_layer).id();
layer_id.0 = entity_layer;
visible_chunk_layer.0 = chunk_layer_id;
visible_entity_layers.0.insert(entity_layer);
pos.set([0.0, SPAWN_Y as f64 + 1.0, 0.0]);
*game_mode = GameMode::Creative;
}
}
fn init_chunk_layer(chunk_layer: &mut ChunkLayer) {
for z in -5..5 {
for x in -5..5 {
chunk_layer.insert_chunk([x, z], UnloadedChunk::new());
}
}
for z in -25..25 {
for x in -25..25 {
chunk_layer.set_block([x, SPAWN_Y, z], BlockState::GRASS_BLOCK);
}
}
// set_block the redstone lamps.
for pos in LAYER_SWITCHERS_POS.iter() {
chunk_layer.set_block(*pos, BlockState::REDSTONE_LAMP);
}
}
// Add more systems here!
fn update_visible_layers(
mut dig_events: EventReader<DiggingEvent>,
mut clients: Query<(&mut VisibleEntityLayers, &VisibleChunkLayer)>,
mut layer_switchers: Query<(&mut LayerSwitcherState, &EntityLayerId, &LayerSwitcherPos)>,
mut chunk_layers: Query<&mut ChunkLayer>,
) {
for dig_event in dig_events.iter() {
for (mut state, layer, pos) in layer_switchers.iter_mut() {
if pos.0 == dig_event.position {
state.0 = !state.0;
if let Ok((mut layers, chunk_layer_id)) = clients.get_mut(dig_event.client) {
if let Ok(mut chunk_layer) = chunk_layers.get_mut(chunk_layer_id.0) {
chunk_layer.set_block(
pos.0,
BlockState::REDSTONE_LAMP.set(
PropName::Lit,
if state.0 {
PropValue::True
} else {
PropValue::False
},
),
);
}
if state.0 {
layers.0.insert(layer.0);
} else {
layers.0.remove(&layer.0);
}
}
}
}
}
}
fn update_sphere(
settings: Res<CoreSettings>,
server: Res<Server>,
mut parts: Query<(&mut Position, &mut Look, &mut HeadYaw), With<SpherePart>>,
) {
let time = server.current_tick() as f64 / settings.tick_rate.get() as f64;
let rot_angles = DVec3::new(0.2, 0.4, 0.6) * SPHERE_FREQ * time * TAU % TAU;
let rot = DQuat::from_euler(EulerRot::XYZ, rot_angles.x, rot_angles.y, rot_angles.z);
let radius = lerp(
SPHERE_MIN_RADIUS,
SPHERE_MAX_RADIUS,
((time * SPHERE_FREQ * TAU).sin() + 1.0) / 2.0,
);
for ((mut pos, mut look, mut head_yaw), p) in
parts.iter_mut().zip(fibonacci_spiral(SPHERE_AMOUNT))
{
debug_assert!(p.is_normalized());
let dir = rot * p;
pos.0 = SPHERE_CENTER + dir * radius;
look.set_vec(dir.as_vec3());
head_yaw.0 = look.yaw;
}
}
/// Distributes N points on the surface of a unit sphere.
fn fibonacci_spiral(n: usize) -> impl Iterator<Item = DVec3> {
let golden_ratio = (1.0 + 5_f64.sqrt()) / 2.0;
(0..n).map(move |i| {
// Map to unit square
let x = i as f64 / golden_ratio % 1.0;
let y = i as f64 / n as f64;
// Map from unit square to unit sphere.
let theta = x * TAU;
let phi = (1.0 - 2.0 * y).acos();
DVec3::new(theta.cos() * phi.sin(), theta.sin() * phi.sin(), phi.cos())
})
}
fn lerp(a: f64, b: f64, t: f64) -> f64 {
a * (1.0 - t) + b * t
}
@Bafbi I'm unable to reproduce the desync you're seeing using your example. I've repeatedly hit the lamps in different ways and everything seems OK so far. What steps did you do to arrive at the screenshot?
@rj00a ngl, I have no idea, I couldn't reproduce it either but I'm sure it was with this code.
Something I would also really like to have is the possibility to put the Entity of a client directly in the EntityLayerId component, which would mean that only this client can see this entity. This would be much more practical than having to manage a layer for each client.
I put what I have so far for the ctf example in #426.
Currently, there is a bug where new players on a team are unable to see existing players. I don't think I'm doing anything too weird in the team joining system, so I'm inclined to think it could be a bug in this PR.
Also, it would be nice if we could make cloning entities to other layers a bit easier. Something where it clones the entity's appearance as it currently is, and then only updates position, rotation, etc automatically.
Something I would also really like to have is the possibility to put directly the Entity of a client in the EntityLayerId component
If you insert an EntityLayer component on the client itself, you will get exactly the behavior you're looking for.
Also, it would be nice if we could make cloning entities to other layers a bit easier. Something where it clones the entity's appearance as it currently is, and then only updates position, rotation, etc automatically.
The flecs ECS library has a feature that lets you inherit components from other entities. It uses a special relationship the query engine understands, called IsA. Components from the child entity take precedence over the parent entity. https://ajmmertens.medium.com/a-roadmap-to-entity-relationships-5b1d11ebb4eb#cc02
The bevy devs have plans to add entity relationships to bevy_ecs so we might be getting this feature in the near future. Once it's in, I don't think there's anything special we need to do on our end to make this work.
The visibility bug that's happening in the ctf example is happening because the packets to create the new entity with the same uuid arrive before the destroy packets for the old ones.
The visibility bug that's happening in the ctf example is happening because the packets to create the new entity with the same uuid arrive before the destroy packets for the old ones.
Should be fixed now. This also fixed the issue where cows were disappearing in the cow sphere example 😃
Just rebased the ctf example, I can confirm it's fixed.
Alright, this should be ready to go.
@dyc3 If you'd like to merge the ctf example we can do that now.
Hmm mysterious CI failures.
The CI failures might be related to Testing many_players. Maybe it's getting OOM killed?
|
gharchive/pull-request
| 2023-07-26T06:55:20 |
2025-04-01T06:40:50.149804
|
{
"authors": [
"Bafbi",
"dyc3",
"rj00a"
],
"repo": "valence-rs/valence",
"url": "https://github.com/valence-rs/valence/pull/424",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2536949265
|
Python client has missing exports
Describe the bug
Some symbols aren't exported, so not available for a user
Expected Behavior
.
Current Behavior
.
Reproduction Steps
.
Possible Solution
No response
Additional Information/Context
No response
Client version used
1.1
Engine type and version
N/A
OS
N/A
Language
Python
Language Version
N/A
Cluster information
No response
Logs
No response
Other information
No response
@Yury-Fridlyand please verify the list of exports required for the Python release. We need to define the list of Symbols required for the release in this issue. Symbols like the BaseClient and BaseTransaction are not required.
There must be a better way to indicate that than running tests.
Any ideas? Please post in the PR.
What about generating a list of exported Symbols from all *.py files (with the exception of protobuf) and comparing with the __init__.py list. It's possible that we're missing Symbols in the IT tests.
cc: @ikolomi
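A minimal sketch of that comparison, assuming the package sources live under python/python/glide and that __init__.py lists its exports in __all__ (both are assumptions, not verified here):
import ast
import pathlib

PKG = pathlib.Path("python/python/glide")  # assumed package root

def module_level_names(path):
    # Public classes, functions and assignments defined at module level
    names = set()
    for node in ast.parse(path.read_text()).body:
        if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
            if not node.name.startswith("_"):
                names.add(node.name)
        elif isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and not target.id.startswith("_"):
                    names.add(target.id)
    return names

# Names exported from __init__.py via __all__
exported = set()
for node in ast.parse((PKG / "__init__.py").read_text()).body:
    if isinstance(node, ast.Assign):
        for target in node.targets:
            if isinstance(target, ast.Name) and target.id == "__all__":
                exported = {ast.literal_eval(elt) for elt in node.value.elts}

# Names defined anywhere in the package (excluding protobuf) but missing from __all__
defined = set()
for path in PKG.rglob("*.py"):
    if "protobuf" in path.parts or path.name == "__init__.py":
        continue
    defined |= module_level_names(path)

print(sorted(defined - exported))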
That's almost what I did semi-manually. I used imports in tests as a reference.
Is it possible to use import from glide and import from glide.file?
The test can't use glide as a complete module?
@Yury-Fridlyand
can you attach the output of that test to a txt file and post it here?
BitFieldGet
BitFieldIncrBy
BitFieldOverflow
BitFieldSet
BitmapIndexType
BitOffset
BitOffsetMultiplier
BitOverflowControl
BitwiseOperation
OffsetOptions
SignedEncoding
UnsignedEncoding
ClosingError
RequestError
Script
Limit
ListDirection
ConditionalChange
ExpireOptions
ExpiryGetEx
ExpirySet
ExpiryType
ExpiryTypeGetEx
FlushMode
FunctionRestorePolicy
InfBound
InfoSection
InsertPosition
UpdateOptions
OrderBy
AggregationType
GeoSearchByBox
GeoSearchByRadius
GeoSearchCount
GeospatialData
GeoUnit
InfBound
LexBoundary
RangeByIndex
RangeByLex
RangeByScore
ScoreBoundary
ScoreFilter
ExclusiveIdBound
IdBound
MaxId
MinId
StreamAddOptions
StreamClaimOptions
StreamGroupOptions
StreamPendingOptions
StreamReadGroupOptions
StreamReadOptions
TrimByMaxLen
TrimByMinId
ClusterTransaction
Transaction
GlideClientConfiguration
GlideClusterClientConfiguration
ProtocolVersion
ServerCredentials
OK
TimeoutError
TEncodable
TFunctionStatsSingleNodeResponse
TResult
GlideClient
GlideClusterClient
TGlideClient
AllNodes
AllPrimaries
ByAddressRoute
RandomNode
Route
SlotIdRoute
SlotKeyRoute
SlotType
BaseClientConfiguration
GlideClusterClientConfiguration
NodeAddress
PeriodicChecksManualInterval
PeriodicChecksStatus
ReadFrom
CoreCommands
GlideClientConfiguration
GlideClusterClientConfiguration
ProtocolVersion
OK
TEncodable
ConfigurationError
BaseClient
GlideClient
GlideClusterClient
TGlideClient
ClusterScanCursor
ObjectType
ProtocolVersion
RequestError
GlideClient
GlideClusterClient
BitFieldGet
BitFieldSet
BitmapIndexType
BitOffset
BitOffsetMultiplier
BitwiseOperation
OffsetOptions
SignedEncoding
UnsignedEncoding
Limit
ListDirection
OrderBy
ExpiryGetEx
ExpiryTypeGetEx
FlushMode
FunctionRestorePolicy
InsertPosition
AggregationType
GeoSearchByBox
GeoSearchByRadius
GeospatialData
GeoUnit
InfBound
LexBoundary
OrderBy
RangeByIndex
ScoreBoundary
ScoreFilter
IdBound
MaxId
MinId
StreamAddOptions
StreamClaimOptions
StreamGroupOptions
StreamReadGroupOptions
TrimByMinId
BaseTransaction
ClusterTransaction
Transaction
ProtocolVersion
OK
TResult
TSingleNodeRoute
SlotIdRoute
SlotType
GlideClient
GlideClusterClient
TGlideClient
Level
Logger
what about json.py? We aren't exporting any commands from json.py
This is a superset of all (I believe) symbols that the Python wrapper has. Each symbol should be examined and exported if required by the interface:
(.env) ikolomin@u2d2e243c3f8c57:~/valkey-glide/python/python/glide$ python print_symbols.py
Symbol Table for: ./routes.py
Scope: module - top
Symbols: ['SlotType', 'Route', 'AllNodes', 'AllPrimaries', 'RandomNode', 'SlotKeyRoute', 'SlotIdRoute', 'ByAddressRoute']
Symbol Table for: ./glide_client.py
Scope: module - top
Symbols: ['BaseClient', 'GlideClusterClient', 'GlideClient', 'TGlideClient']
Symbol Table for: ./protobuf_codec.py
Scope: module - top
Symbols: ['ProtobufCodec', 'PartialMessageException', 'Exception']
Symbol Table for: ./exceptions.py
Scope: module - top
Symbols: ['GlideError', 'Exception', 'ClosingError', 'RequestError', 'TimeoutError', 'ExecAbortError', 'ConnectionError', 'ConfigurationError']
Symbol Table for: ./logger.py
Scope: module - top
Symbols: ['Level', 'Logger']
Symbol Table for: ./__init__.py
Scope: module - top
Symbols: ['PubSubMsg']
Symbol Table for: ./constants.py
Scope: module - top
Symbols: ['OK', 'DEFAULT_READ_BYTES_SIZE', 'T', 'TOK', 'TResult', 'TRequest', 'TClusterResponse', 'TSingleNodeRoute', 'TJsonResponse', 'TEncodable', 'TFunctionListResponse', 'TFunctionStatsSingleNodeResponse', 'TFunctionStatsFullResponse', 'TXInfoStreamResponse', 'TXInfoStreamFullResponse']
Symbol Table for: ./config.py
Scope: module - top
Symbols: ['NodeAddress', 'ReadFrom', 'ProtocolVersion', 'BackoffStrategy', 'ServerCredentials', 'PeriodicChecksManualInterval', 'PeriodicChecksStatus', 'BaseClientConfiguration', 'GlideClientConfiguration', 'GlideClusterClientConfiguration']
Symbol Table for: ./async_commands/transaction.py
Scope: module - top
Symbols: ['TTransaction', 'BaseTransaction', 'Transaction', 'ClusterTransaction']
Symbol Table for: ./async_commands/stream.py
Scope: module - top
Symbols: ['StreamTrimOptions', 'TrimByMinId', 'TrimByMaxLen', 'StreamAddOptions', 'StreamRangeBound', 'MinId', 'MaxId', 'IdBound', 'ExclusiveIdBound', 'StreamReadOptions', 'StreamGroupOptions', 'StreamReadGroupOptions', 'StreamPendingOptions', 'StreamClaimOptions']
Symbol Table for: ./async_commands/cluster_commands.py
Scope: module - top
Symbols: ['ClusterCommands']
Symbol Table for: ./async_commands/bitmap.py
Scope: module - top
Symbols: ['BitmapIndexType', 'OffsetOptions', 'BitwiseOperation', 'BitEncoding', 'SignedEncoding', 'UnsignedEncoding', 'BitFieldOffset', 'BitOffset', 'BitOffsetMultiplier', 'BitFieldSubCommands', 'BitFieldGet', 'BitFieldSet', 'BitFieldIncrBy', 'BitOverflowControl', 'BitFieldOverflow']
Symbol Table for: ./async_commands/__init__.py
Scope: module - top
Symbols: []
Symbol Table for: ./async_commands/sorted_set.py
Scope: module - top
Symbols: ['InfBound', 'AggregationType', 'ScoreFilter', 'ScoreBoundary', 'LexBoundary', 'RangeByIndex', 'RangeByScore', 'RangeByLex', 'GeospatialData', 'GeoUnit', 'GeoSearchByRadius', 'GeoSearchByBox', 'GeoSearchCount']
Symbol Table for: ./async_commands/standalone_commands.py
Scope: module - top
Symbols: ['StandaloneCommands']
Symbol Table for: ./async_commands/command_args.py
Scope: module - top
Symbols: ['Limit', 'OrderBy', 'ListDirection', 'ObjectType']
Symbol Table for: ./async_commands/core.py
Scope: module - top
Symbols: ['ConditionalChange', 'ExpiryType', 'ExpiryTypeGetEx', 'InfoSection', 'ExpireOptions', 'UpdateOptions', 'ExpirySet', 'ExpiryGetEx', 'InsertPosition', 'FlushMode', 'FunctionRestorePolicy', 'CoreCommands']
Symbol Table for: ./async_commands/server_modules/json.py
Scope: module - top
Symbols: ['JsonGetOptions']
Produced by the following code:
import os
import fnmatch
import symtable

# Function to print only module-level symbols, excluding those starting with '_', lowercase letters, or imported symbols
def print_module_symbols(table):
    if table.get_type() == 'module':
        # Filter symbols to exclude those starting with _ or lowercase letters, and imports
        symbols = [
            s.get_name() for s in table.get_symbols()
            if not (s.get_name().startswith('_') or s.get_name()[0].islower() or s.is_imported())
        ]
        print(f"Scope: {table.get_type()} - {table.get_name()}")
        print(f"Symbols: {symbols}")

# Function to process a single Python file
def process_python_file(file_path):
    with open(file_path, 'r') as f:
        code = f.read()
    try:
        # Create a symbol table for the file
        symbol_table = symtable.symtable(code, file_path, 'exec')
        # Print only the module-level symbol table
        print(f"\nSymbol Table for: {file_path}")
        print_module_symbols(symbol_table)
    except Exception as e:
        print(f"Error processing {file_path}: {e}")

# Get the path of the running script
current_file = os.path.abspath(__file__)

# Walk through the current directory and subdirectories, ignoring .env, protobuf, and the running file
for root, dirs, files in os.walk('.', topdown=True):
    # Exclude the .env and protobuf directories
    dirs[:] = [d for d in dirs if d not in ['.env', 'protobuf']]
    # Process each Python file (*.py), excluding the running script
    for file_name in fnmatch.filter(files, '*.py'):
        file_path = os.path.join(root, file_name)
        # Ignore the current running file
        if os.path.abspath(file_path) == current_file:
            continue
        process_python_file(file_path)
I manually cleaned the output from @ikolomi's script and fed it into my script, so these are missing in python/python/glide/__init__.py:
BaseClient
BaseTransaction
ClusterCommands
DEFAULT_READ_BYTES_SIZE
Exception
JsonGetOptions
Level
Logger
ObjectType
PartialMessageException
ProtobufCodec
Route
StandaloneCommands
T
TClusterResponse
TEncodable
TFunctionListResponse
TFunctionStatsFullResponse
TFunctionStatsSingleNodeResponse
TGlideClient
TJsonResponse
TOK
TRequest
TResult
TSingleNodeRoute
TTransaction
TXInfoStreamFullResponse
TXInfoStreamResponse
My PR includes most of these, but I intentionally skipped json-related things. Should I include them?
|
gharchive/issue
| 2024-09-19T17:13:57 |
2025-04-01T06:40:50.229690
|
{
"authors": [
"Yury-Fridlyand",
"acarbonetto",
"avifenesh",
"ikolomi"
],
"repo": "valkey-io/valkey-glide",
"url": "https://github.com/valkey-io/valkey-glide/issues/2332",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1872279150
|
🛑 PMANG is down
In 8544260, PMANG (https://www.pmang.jp) was down:
HTTP code: 0
Response time: 0 ms
Resolved: PMANG is back up in cd1c0e1 after 1 hour, 50 minutes.
|
gharchive/issue
| 2023-08-29T19:31:52 |
2025-04-01T06:40:50.236741
|
{
"authors": [
"valofejikim"
],
"repo": "valofe/upptime",
"url": "https://github.com/valofe/upptime/issues/103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
819912845
|
The app crashes when the user taps on the “Moonpay” link on the “Add Funds” screen.
Frequency: 100%
Repro on build version: Android Internal Build V1.11.0(1004294337)
Does not repro version Test Flight Build V1.11.0(47)
Repro on devices: Samsung Galaxy A5(7.0), Samsung Galaxy J7 pro (8.0)
Does not repro devices iPhone XR(14.4)
Testing Account: (+49) 4976 78 / Backup Key
Pre-condition: Users should login with Germany Country phone number.
Repro Steps:
Launch the application & login with a test account.
Now tap on Hamburger Menu & tap on “Add & Withdraw” option.
Now go to the “Add Funds” section & select “Celo dollar(cUSD)” from Select digital Currency field & “Debit card or bank account” from Pay with field.
Now enter $20 in the amount field & tap on Next button.
Now tap on the Moonpay link.
Observed that,
Current Behavior: The app crashes & the user gets blocked to complete the transaction.
Expected Behavior: Users should be able to proceed with transactions without any crash.
Impact: User gets blocked to complete the transaction.
Investigation: Also we have observed crashes when a user taps on the “Ramp” link on Add Funds screen.
Attachment: CrashOnMoonpay.mp4
Crash Log: celo_logs .txt
The above ticket is a duplicate of https://github.com/valora-inc/wallet/issues/70
|
gharchive/issue
| 2021-03-02T11:09:54 |
2025-04-01T06:40:50.245145
|
{
"authors": [
"ValoraQA"
],
"repo": "valora-inc/wallet",
"url": "https://github.com/valora-inc/wallet/issues/78",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1328444835
|
Load CLI config as a part of the context object
Is your feature request related to a problem? Please describe.
We use the global CLI config in so many places, so instead of calling a load function everywhere we should load and store the global CLI config as a part of the context object at the beginning.
Describe the solution you'd like
Load and store the CLI config as an object on the Context class
This is a general refactor idea - closing as it is better discussed in a wider context
|
gharchive/issue
| 2022-08-04T10:55:46 |
2025-04-01T06:40:50.246822
|
{
"authors": [
"DavidMinarsch",
"angrybayblade"
],
"repo": "valory-xyz/open-aea",
"url": "https://github.com/valory-xyz/open-aea/issues/256",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
610793799
|
Detectron2 to Caffe
@zhiqwang
Why did you include the detectron2 tag in this repo
Have you converted detectron2 instance segmentation into caffe or onnx
Hi, @deepseek
Why did you include the detectron2 tag in this repo
I'm working in embedding detectron2 to this repo.
Have you converted detectron2 instance segmentation into caffe or onnx
Now I can convert SSD and related models from PyTorch to Caffe using brocolli, but inisis's version targets PyTorch 0.4.0, which is incompatible with most environments; I want to upgrade it to PyTorch 1.3+ in this repo.
Thanks!
|
gharchive/issue
| 2020-05-01T14:57:56 |
2025-04-01T06:40:50.270357
|
{
"authors": [
"deepseek",
"zhiqwang"
],
"repo": "vanillapi/demonet",
"url": "https://github.com/vanillapi/demonet/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2575776922
|
Time reactive ui #9
created a night_time.html with a moon which winks at you when you hover over it
redirects to this page between 2am-5am and back to index.html after 5am
@vansh-codes
@vansh-codes i made some changes, is it possible to revert the merge?
I think it'll be better if I change the website so that the moon winks when someone clicks the contact button during a given time
or do you like the way it is now? The whole website changes during the given time in the current code
Would be better if you create new PR
|
gharchive/pull-request
| 2024-10-09T12:23:55 |
2025-04-01T06:40:50.296311
|
{
"authors": [
"hana-maria",
"vansh-codes"
],
"repo": "vansh-codes/ChaosWeb",
"url": "https://github.com/vansh-codes/ChaosWeb/pull/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
829655875
|
Update MicrosoftRouter endpoint
A recent PR added a few www prefixes to some providers endpoints; in the case of Microsoft, it breaks it.
This PR remove the invalid www from the Microsoft oAuth login host.
Hi, any chance to get this merged soon?
|
gharchive/pull-request
| 2021-03-12T00:59:19 |
2025-04-01T06:40:50.300699
|
{
"authors": [
"m-barthelemy"
],
"repo": "vapor-community/Imperial",
"url": "https://github.com/vapor-community/Imperial/pull/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
433115703
|
Ksync watch crashes quite often after few minutes run
I was running ksync watch in the foreground and it keeps crashing, and I have to restart it manually.
Crash is always due to some connection error that happens.
ERRO[0299] lost connection to cluster ContainerName=
LocalPath=/home/ubuntu/www/topaz LocalReadOnly=false Name=on-rodent Namespace=local Pod= Reload=false RemotePath=/topaz RemoteReadOnly=false Selector="[version=ad429a7-20190415-052641]"
Please suggest what could be the reason?
Can we add some auto-restart in case of connection failures instead of the aborts that happen now? We could abort only when there have been n restarts due to connection issues within a certain time, or should we leave this for the user to manage with systemd or some other supervisor process?
Looks like you have a shaky connection to your cluster. The connection code has to maintain sockets and does do re-tries before failing. As you saw, cycling the process recreates the connection, so placing it under process management should give you more resiliency in this case.
I see it crash after an exact interval. Is it because of some configuration issue in Kubernetes? Does something automatically time out and close the connection? For example, kubectl exec has a timeout which kicks you out of the shell.
I am in AWS, provisioned the cluster using kops.
Hmm, good question. It would be hard to say what it is exactly without looking at your setup.
This is happening to many people in our company. Is there any way I can print some error in the code and find out the reason for the loss? Any way to debug?
That makes sense, you may want to look at some of the other issues/docs that address this. TL;DR is that you might want to use a process supervisor.
Related to the final point here https://github.com/vapor-ware/ksync#troubleshooting
@timfallmk @grampelberg Around 30 developers are using ksync for local development, and every one of them has complained about the crash. We should really find the root cause and fix it.
ERRO[0505] lost connection to cluster ContainerName= LocalPath=/Users/alok87/code/practo/heimdall LocalReadOnly=true Name=heimdall Namespace=bug Pod= Reload=false RemotePath=/www/app RemoteReadOnly=false Selector="[product=heimdall]"
I'd bet that there's something related to keep-alive and how you've got the connection configured to your remote cluster. ksync isn't doing anything special with that connection, so maybe some looking around at client-go watches getting timed out would help?
The easiest solution to this is what @timfallmk suggests - just have the OS restart watch. It was kinda meant to do that to begin with. If you're on OSX, watch can be added to launchctl or systemd if you're on linux.
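For the systemd route, a minimal user-level unit could look like this (the binary path and unit location are assumptions, and it relies on watch exiting non-zero when it loses the connection):
# ~/.config/systemd/user/ksync-watch.service
[Unit]
Description=ksync watch

[Service]
ExecStart=/usr/local/bin/ksync watch
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
Then enable it with systemctl --user enable --now ksync-watch.service so the OS restarts watch whenever it drops the connection.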
|
gharchive/issue
| 2019-04-15T06:33:29 |
2025-04-01T06:40:50.306510
|
{
"authors": [
"alok87",
"grampelberg",
"timfallmk"
],
"repo": "vapor-ware/ksync",
"url": "https://github.com/vapor-ware/ksync/issues/285",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
417377101
|
JobData naming collision warning
We should provide a console warning when multiple jobs are registered with the same JobData name to help prevent collisions during decoding.
Fixed in #8
|
gharchive/issue
| 2019-03-05T16:12:19 |
2025-04-01T06:40:50.317567
|
{
"authors": [
"mcdappdev"
],
"repo": "vapor/jobs",
"url": "https://github.com/vapor/jobs/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2195867870
|
acecore DM support
acecore can't be used in DMs yet. This has to be fixed, along with checking whether the guild ID is present and, if not, disabling a few commands in DMs.
Maybe also add a bool to the command structure indicating whether it should even be registered with the empty string that makes it global (and therefore available in DMs); if not, it could make command registration a lot faster, eliminating rate limits.
|
gharchive/issue
| 2024-03-19T19:53:40 |
2025-04-01T06:40:50.333227
|
{
"authors": [
"vaporvee"
],
"repo": "vaporvee/acecore",
"url": "https://github.com/vaporvee/acecore/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
729135940
|
Frames are being dropped due to low threshold.
Looking to improve the data points we get after processing thresholds.
Looks like some frames are lost due to the threshold being small.
Also, trying to see if we can improve on accuracy. Not sure if this is an issue. It would be good to benchmark the outcome.
@vardanagarwal - Let me know your thoughts on how we can proceed. I do see an accuracy issue as well.
Looking at 2 use cases.
Still video
Video with predetermined movement
I have tried changing the
thresh = cv2.erode(thresh, None, iterations=20)
thresh = cv2.dilate(thresh, None, iterations=20)
thresh = cv2.medianBlur(thresh, 3)
As I increase the iterations, I do see that I get a lot more data points. The accuracy still needs to be looked at.
I guess we can create some videos and manually annotate them first. This would make the benchmarking process much easier. Then we can find some metrics like MIOU with different processing operations to find the best one.
For the thresholding portion, the first thing I will do is to separate the thresholds for the left and right eye as if the lighting is on one side then it highly impacts it. The next thing we can do to automate the thresholds is to use some type of calibration function which would check various thresholds and find the most suitable one.
Any ideas on how to collaborate better? I am in PST.
I am sorry I don't know the full form of PST. I have added a video in the folder eye_tracker. Along with that, you can find its annotations having points for the center of the eyeballs.
Regarding collaboration, do you have any ideas with which we can move forward.
My bad. PST stands for Pacific Standard Time. I will look at the sample video you have added.
The video looks great but looks like it has too many variations. I can create a PR with a simpler video with no head tilt and just simple eye movements.
Okay, that will work.
@vardanagarwal - Please take a look at this - https://github.com/ctippur/Proctoring-AI/tree/master/eye_tracking. If it is ok, I can create a PR.
Yeah it is okay.
PR - https://github.com/vardanagarwal/Proctoring-AI/pull/26
Feel free to merge. I have added some rough benchmarking that we can try and validate.
@vardanagarwal Thanks for merging.
Here is what I did so far.
Changed to read from file
cap = cv2.VideoCapture("eye_tracking/center_left_center.mp4")
Rotate the image after reading
frame_count = 0
frame_max_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
right_x = []
right_y = []
left_x = []
left_y = []
while frame_count < 10:
    ret, img = cap.read()
    if ret is False:
        break
    img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
Changed contouring to return cx and cy
Observation:
Interestingly, process_thresh seems to be processing more frames than what I have input. I am controlling the frames to restrict them to just 10, but I seem to be getting 20 cx's.
Observation#2:
If I remove cv2.createTrackbar('threshold', 'image', 75, 255, nothing), contouring returns None.
@vardanagarwal let me know if we can go on a call or webex (I can set it up)
Yeah sure! We can do that. I'll explain the code to you as well of how it working at the moment.
@vardanagarwal I have tried to reach out to you on LinkedIn. Hope I reached the right person.
Please upload the requirements file for this project, and note in the readme file what Python version is used.
I am unable to get this project to run perfectly. Can anybody give me the steps to run it?
|
gharchive/issue
| 2020-10-25T23:04:09 |
2025-04-01T06:40:50.345964
|
{
"authors": [
"Havish123",
"ctippur",
"rushabh-wadkar",
"vardanagarwal"
],
"repo": "vardanagarwal/Proctoring-AI",
"url": "https://github.com/vardanagarwal/Proctoring-AI/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1157965131
|
Search bar bug
Issue reference:
Search Bug Filter issue resolved. #11
Proposed changes:
The search bar previously worked only if you typed all lower case, which is not the expected behavior. I have fixed 2 issues (a sketch of the filter logic follows the list):
Filter products irrespective of case.
Filter products if the search string is a substring of the product's name.
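A minimal sketch of the intended behavior (names here are illustrative, not the exact reducer code):
const filterProducts = (products, searchText) => {
    const query = searchText.toLowerCase();
    return products.filter((product) => product.name.toLowerCase().includes(query));
};
// e.g. filterProducts(allProducts, 'ShIrT') now matches a product named "Blue Shirt"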
Type of change:
[ ] Bug fix (non-breaking change which fixes an issue)
Checklist:
[ ] My code follows the style guidelines of this project
[ ] I have performed a self-review of my own code
[ ] I have commented on my code, particularly in hard-to-understand areas
[ ] I have made corresponding changes to the documentation
[ ] My changes generate no new warnings
Current working behavior.
Why did you update the react-scripts dependencies 🤔 Is there any problem starting the dev-server?
I can see you added some new lines to the components and files. Now two things:
You should follow the existing code style.
Why are you doing changes in the files where it is not required? The files src/components/Contact/index.js and src/components/context/filter_context.js have nothing to do with the actual filter logic. The only file that was meant to be changed was the filter_reducer.js
Also try to comment only on the parts which are hard to understand. Array.includes() is very common and doesn't need to be commented 👍
Now,
Remove the extra lines in the above files I mentioned.
Remove the comment you added in the reducer.
Also, If there is a problem with the dependencies, create a new issue. For now, just revert that 👍.
Okay, I'll do the required changes. I updated react-scripts because it was not working on my local machine. I'll do as instructed.
What was the issue?
The local server was failing to run.
Wait then, let me check.
Should I make the changes that you asked for and again commit ??
wait for now, let me check the issue 👍
Hmm, since react-scripts@5 is not causing any breaking change, we can have that, no need to change.
Also I have tested the PR, works fine 👍
Just make the other changes 👍
Okay
Hey, I have made the commits. Have a look and thanks for the opportunity
@sagnik2001 you were supposed to remove the comments too 🙂
Done, sorry I missed that previously. Thanks.
Great 🎉, Thanks for your contribution @sagnik2001
Stay tuned, more issues will be added. Also, if you find anything that could be improved, create an issue 👍
Also, the backend and the admin panel will soon be added, I'll be happy to see you contributing to them too 😅
You can join the discord channel also (finally we have it now 😅)
Happy contributing 🥳
|
gharchive/pull-request
| 2022-03-03T04:46:52 |
2025-04-01T06:40:50.360096
|
{
"authors": [
"sagnik2001",
"varunKT001"
],
"repo": "varunKT001/tomper-wear-ecommerce",
"url": "https://github.com/varunKT001/tomper-wear-ecommerce/pull/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1810635195
|
npm ERR! Missing script: "frontend-build-all"
RazorSvelte.csproj in the Carbon UI template
<PreBuildEvent>npm run frontend-build-all --color=always</PreBuildEvent>
should be changed to
<PreBuildEvent>npm run fe-build-all --color=always</PreBuildEvent>
I don't maintain the Carbon UI template anymore. It's just an example. The master branch contains a modified Bootstrap template which I do maintain.
|
gharchive/issue
| 2023-07-18T20:06:47 |
2025-04-01T06:40:50.432956
|
{
"authors": [
"Saad5400",
"vbilopav"
],
"repo": "vb-consulting/RazorSvelte",
"url": "https://github.com/vb-consulting/RazorSvelte/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
405369230
|
Audit finding
https://github.com/vbondarevsky/Connector/blob/1a8e8bccfe4457efd6ea0f9876431485751543c4/src/CommonModules/КоннекторHTTP/Ext/Module.bsl#L456
Safe mode must always be set before executing the Вычислить method; otherwise, an exception will be thrown in a cluster with security profiles enabled.
@vbondarevsky without a conditional call there will be a platform check error when referencing a module that does not exist in configurations without БСП (the Standard Subsystems Library).
It is simpler to put УстановитьБезопасныйРежим(Истина) before Вычислить.
Cheats :)
But it looks neat.
|
gharchive/issue
| 2019-01-31T18:00:42 |
2025-04-01T06:40:50.437117
|
{
"authors": [
"zeegin"
],
"repo": "vbondarevsky/Connector",
"url": "https://github.com/vbondarevsky/Connector/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1674385721
|
Select number of jobs provided to Vivado
BUILD_JOBS = ?
Fixed in d6370abaee8dc87af609b6b81e5cbcb5d8b1c11f
|
gharchive/issue
| 2023-04-19T07:57:15 |
2025-04-01T06:40:50.438375
|
{
"authors": [
"vborchsh"
],
"repo": "vborchsh/make-fpga",
"url": "https://github.com/vborchsh/make-fpga/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
348326867
|
Upgrading PHP to 7.1 / 7.2
Is there a preferred way to upgrade the environment to php 7.1 or php 7.2?
Currently PHP 7.0 is bundled with the box. According to http://php.net/supported-versions.php, 7.1 ends active support on Jan 1, 2019, which isn't terribly far away.
Thanks
+1
@lzivadinovic
Thanks for your clarification.
We will update it. (I am very happy if you can send PR. 😊 )
@miya0001 It's not a problem for me to write a pre-step where the PHP version is read from site.yml and the box is configured according to the selected version. What I don't like about that approach is that the user first downloads the VCCW box with PHP 7.0 already installed (https://github.com/vccw-team/vccw-xenial64/blob/master/provision/playbook.yml#L62) and then modifies that installation. Also, I really don't see the point in using a pre-packed box at all; it kills modularity if you want to, for example, change the PHP version. It's also a hassle to update the box when the bento/ubuntu-16.04 box is updated; you could simply use ubuntu/xenial64 or bionic64 as the base and do the provisioning here (the only downside I see is a longer up time, but I guess no more than 2 minutes :) ). But it's your approach.
If it's OK for you to change the PHP version while provisioning later on (i.e. setting the PHP version in site.yml in this repo), I could send you a PR, and I would be glad to contribute to the project. The proposed solution is as follows:
Add a new environment variable for selecting the desired version of PHP from the ondrej PPA
Install and reconfigure PHP with an appropriate php.ini for that specific version of PHP
If the version of PHP is left at the standard PHP 7.0, just skip the initial PHP install/configure role
Tell me if you agree on the proposed solution and I'll send you a PR in a few days.
@lzivadinovic @miya0001 @harkor @jgraup
Guys, how do we set which PHP version to use in the default.yml before we hit vagrant reload --provision?
That's the place to set it, right?
Using this doesn't work:
composers:
phpunit/phpunit:7.3
squizlabs/php_codesniffer:~2.0
wp-coding-standards/wpcs:*
it just gives a PHP version like this:
7.0.26-2+ubuntu16.04.1+deb.sury.org+2
Try this solution and see if it works.
https://github.com/vccw-team/change-php-version
According to the following article:
https://qiita.com/miya0001/items/2499917d7ec3bc905781
the solution is to execute the following command on the guest:
curl https://raw.githubusercontent.com/vccw-team/change-php-version/master/run.sh | bash -s -- 7.3
and create the following file in the "Vagrantfile" directory:
provision-post.sh
#! /usr/bin/env bash
set -ex
curl https://raw.githubusercontent.com/vccw-team/change-php-version/master/run.sh | bash -s -- 7.3
@harkor
If I update with apt-get install, using 7.2 instead of 7.0, and restart Apache, mailcatcher doesn't work anymore.
You need to change the 'sendmail_path' as follows.
/etc/php/7.2/apache2/php.ini
sendmail_path = /usr/bin/env catchmail -f some@from.address
and
sudo service apache2 restart
According to the following article:
https://qiita.com/miya0001/items/2499917d7ec3bc905781
the solution is to execute the following command on the guest:
curl https://raw.githubusercontent.com/vccw-team/change-php-version/master/run.sh | bash -s -- 7.3
and create the following file in the "Vagrantfile" directory:
provision-post.sh
#! /usr/bin/env bash
set -ex
curl https://raw.githubusercontent.com/vccw-team/change-php-version/master/run.sh | bash -s -- 7.3
Hi @amaguri1505, just wondering if you know what happens to Xdebug after updating PHP. In my case, there is no Xdebug installed and I am having a hard time trying to figure out how to reconfigure the settings for it to work. If you happen to know, do you mind sharing any helpful resources? Thank you!
I know this is old, but I really need to figure out how to update the PHP version so newer themes will run locally.
@litone01 in your post above, you say to execute that curl command from my Vagrant directory?
Then I create the provision-post.sh file in my Vagrant directory as well, and run vagrant provision?
Sorry, I'm a little new to Vagrant boxes, so if you could over-explain the answer I might be able to get it to work.
I know this is old, but I really need to figure out how to update the PHP version so newer themes will run locally.
@litone01 in your post above, you say to execute that curl command from my Vagrant directory? Then I create the provision-post.sh file in my Vagrant directory as well, and run vagrant provision?
Sorry, I'm a little new to Vagrant boxes, so if you could over-explain the answer I might be able to get it to work.
@brianmdesigns Nope, it is not my post (I am just quoting the previous reply) :( It has been a long time and I have not been using PHP and WordPress for some time, so I can't really recall the solution I adopted at that time. Maybe you can use Google Translate and follow this link https://qiita.com/miya0001/items/2499917d7ec3bc905781, which is also from the discussion above.
@brianmdesigns
Do you want to use PHP versions?
I just want to know a way to load the latest PHP version.
@brianmdesigns
@litone01
The current version of VCCW does not seem to be able to upgrade PHP.
This is because the Ubuntu OS version is old.
I think we need to upgrade the Ubuntu OS.
Hmmm, OK, will that be happening or do we just pick a new box? I like how VCCW has one config file with all the variables. I use VVV too, but I just enjoy the flow of VCCW a little better. So is there any way to add PHP variables in the site.yml and maybe update the Ubuntu box?
@brianmdesigns
I have tried the following box files.
Unfortunately, if we change the box file, we will also need to change the playbook configuration files.
It will be a very big change and take time.
https://app.vagrantup.com/generic/boxes/ubuntu1804
https://app.vagrantup.com/ubuntu/boxes/bionic64
https://app.vagrantup.com/giusetavera/boxes/wplamp
You can use PHP 7.4 if you do the following.
The following tasks install a PPA (Personal Package Archive) at your own risk.
vagrant up
vagrant ssh
sudo add-apt-repository ppa:jczaplicki/xenial-php74-temp
sudo apt-get update
curl https://raw.githubusercontent.com/vccw-team/change-php-version/master/run.sh | bash -s -- 7.4
@brianmdesigns
It just seems like VCCW is going to be obsolete then, if the PHP version isn't updated.
Yes, I think so too.
But VCCW runs on VirtualBox.
VirtualBox does not support Apple silicon Macs.
In order to continue using VCCW, someone needs to switch from VirtualBox to Docker.
Currently that is not scheduled.
|
gharchive/issue
| 2018-08-07T13:42:28 |
2025-04-01T06:40:50.466889
|
{
"authors": [
"IanZea",
"amaguri1505",
"brianmdesigns",
"harkor",
"jgraup",
"jonaspaq",
"litone01",
"lzivadinovic",
"miya0001",
"tkc49"
],
"repo": "vccw-team/vccw",
"url": "https://github.com/vccw-team/vccw/issues/333",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|