Dataset schema (column, type, and value statistics):

identifier          string (lengths 1-43)
dataset             string (3 distinct values)
question            string (4 distinct values)
rank                int64 (0-99)
url                 string (lengths 14-1.88k)
read_more_link      string (1 distinct value)
language            string (1 distinct value)
title               string (lengths 0-200)
top_image           string (lengths 0-125k)
meta_img            string (lengths 0-125k)
images              list (lengths 0-18.2k)
movies              list (lengths 0-484)
keywords            list (lengths 0-0)
meta_keywords       list (lengths 1-48.5k)
tags                null
authors             list (lengths 0-10)
publish_date        string (lengths 19-32)
summary             string (1 distinct value)
meta_description    string (lengths 0-258k)
meta_lang           string (68 distinct values)
meta_favicon        string (lengths 0-20.2k)
meta_site_name      string (lengths 0-641)
canonical_link      string (lengths 9-1.88k)
text                string (lengths 0-100k)
Record 1

identifier: 8585
dataset: dbpedia
question: 0
rank: 3
url: https://docs.veracode.com/r/About_auto_packaging
language: en
title: About auto-packaging
top_image: https://docs.veracode.co…code-favicon.png
meta_img: https://docs.veracode.co…code-favicon.png
images: [ "https://docs.veracode.com/img/Veracode_Docs_Logo_Light_Mode.svg", "https://docs.veracode.com/img/Veracode_Docs_Logo_Dark_Mode.svg" ]
movies: []
keywords: []
meta_keywords: [ "" ]
tags: null
authors: []
publish_date: 2024-08-08T19:40:14+00:00
meta_description: Veracode auto-packaging automates the process of packaging your projects for Static Analysis and Software Composition Analysis (SCA) upload and scan. By automating packaging, you can reduce the burden on your teams to correctly package projects manually, while also ensuring more accurate and consistent scan results.
meta_lang: en
meta_favicon: /img/veracode-favicon.png
canonical_link: https://docs.veracode.com/r/About_auto_packaging
text:
Veracode auto-packaging automates the process of packaging your projects for Static Analysis and Software Composition Analysis (SCA) upload and scan. By automating packaging, you can reduce the burden on your teams to correctly package projects manually, while also ensuring more accurate and consistent scan results.

Auto-packaging:

- Saves time and effort, compared to manual packaging, by eliminating manual steps, such as gathering files and dependencies, configuring build settings, and packaging artifacts.
- Ensures a consistent build process across different environments and platforms. This reduces the risk of discrepancies or errors that can occur when developers manually change the build configurations or there are variations across the configurations.
- Reduces human errors that can occur when developers package projects manually. This improves the accuracy and reliability of the generated artifacts, which ensures that the Static Analysis results are accurate.
- Enables scalability by facilitating the rapid and efficient generation of artifacts for analysis across multiple code repositories, projects, or teams. This scalability is essential for organizations managing large and complex codebases.
- Reduces the time and resources developers spend securing their code, which allows them to focus on writing new code, implementing features, or addressing critical issues. Developers can increase their productivity and accelerate the time-to-market for software products and updates.

The auto-packager runs on your repository to package your projects into artifacts (archive files) that you can upload to the Veracode Platform. To correctly package a project for Static Analysis or SCA upload and scan, the auto-packager automatically detects the required components and configurations for each supported language.

The auto-packager packages your projects into archive files, such as ZIP, JAR, WAR, or EAR, called artifacts. During the packaging process, the auto-packager might create multiple artifacts that it includes in the final artifacts. For example, multiple DLL files inside the final ZIP file. The final artifacts are the complete, packaged archive files that you can upload to Veracode and scan separately. The following table lists examples of the filename format of the final artifacts for each supported language.
Artifact language    | Language tag | Language suffix tag | Example filename
.NET assemblies      | dotnet       | None                | veracode-auto-pack-Web-dotnet.zip
.NET with JavaScript | dotnet       | js                  | veracode-auto-pack-Web-dotnet-js.zip
Android              | None         | None                | The build.gradle file defines the filenames of Java artifacts.
COBOL                | cobol        | None                | veracode-auto-pack-EnterpriseCOBOLv6.3-cobol.zip
C/C++ Linux          | c_cpp        | None                | veracode-auto-pack-CppProjectLibsAndExecutables-c_cpp.zip
C/C++ Windows        | msvc         | None                | veracode-auto-pack-$(SolutionName)-msvc.zip
Dart and Flutter     | None         | None                | The project configuration for Flutter Android or Xcode defines the filenames.
Go                   | go           | None                | veracode-auto-pack-evil-app-go.zip
iOS with Xarchive    | ios          | xcarchive           | veracode-auto-pack-duckduckgo-ios-xcarchive.zip
iOS with CocoaPods   | ios          | podfile             | veracode-auto-pack-signal-ios-podfile.zip
Java with Gradle     | None         | None                | Defined by your build.gradle file.
Java with Maven      | None         | None                | Defined by your pom.xml file.
JavaScript           | js           | None                | veracode-auto-pack-NodeGoat-js.zip
Kotlin               | None         | None                | The filenames of Java artifacts are defined by your build.gradle file.
Perl                 | perl         | None                | veracode-auto-pack-bugzilla-perl.zip
PHP                  | php          | None                | veracode-auto-pack-captainhook-php.zip
Python               | python       | None                | veracode-auto-pack-dvsa-python.zip
React Native         | js           | None                | veracode-auto-pack-convene-js.zip
Ruby                 | ruby         | None                | veracode-auto-pack-railsgoat-ruby.zip
Scala                | None         | None                | The filenames of Java artifacts are defined by your sbt build properties.

Auto-packaging is integrated with the following products:

- Veracode CLI, to integrate auto-packaging in your development environment.
- Veracode GitHub Workflow Integration, to automate repo scanning with GitHub Actions. The auto-packager only supports Java, JavaScript, Python, Go, Scala, Kotlin, React Native, and Android repositories.
- Veracode Azure DevOps Workflow Integration, to automate repo scanning using your pipelines. The auto-packager supports Java, .NET, JavaScript, Python, Go, Kotlin, and React Native projects.
- Veracode Scan for JetBrains, to auto-package applications, scan, and remediate findings in JetBrains IDEs.
- Veracode Scan for VS Code, to auto-package applications, scan, and remediate findings in VS Code.

You can integrate the auto-packager with your local build environment or CI/CD. For example, to add auto-packaging to your build pipelines, you could add the CLI command veracode package to your development toolchains or build scripts (a minimal sketch follows the tool list below). You might need to install one or more of the following tools in your environment:

- A build automation tool that defines build scripts or configurations that specify how to manage dependencies, compile source code, and package code as artifacts.
- A dependency management system to effectively handle project dependencies.
- A compiler that builds source code into executable code.
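For example, a build-script step might look like the following sketch. It assumes the Veracode CLI is installed and on the PATH, that the step runs from the repository root, and that a later stage picks up the archives for upload; the paths are placeholders, not Veracode defaults.

# Hypothetical CI/build-script step (paths are placeholders)
set -euo pipefail
veracode package --source . --output verascan --trust
# Each archive written to verascan/ is a final artifact that you upload to Veracode separately.
ls -1 verascan/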
If the auto-packager does not support specific versions, or it relies on a version supported by your package manager, the Versions column shows Not applicable.

Language                  | Versions                                                        | Package managers
.NET                      | .NET 6, 7, or 8. .NET Framework 4.6 - 4.8. Not supported: MAUI | All
Android                   | A JDK version that you have tested to build your project.      | Gradle
COBOL                     | COBOL-74, COBOL-85, COBOL-2002                                  | Not applicable
C/C++ Linux               | CentOS and Red Hat Enterprise 5-9, openSUSE 10-15               | Not applicable
C/C++ Windows             | C/C++ (32-bit/64-bit)                                           | Not applicable
Dart and Flutter          | Dart 3.3 and earlier / Flutter 3.19 and earlier                 | Pub
Go                        | 1.14 - 1.22                                                     | Go Modules
iOS                       | Not applicable                                                  | All
Java (select from the Package managers column) | A JDK version that you have tested to build your project. | Gradle, Maven
JavaScript and TypeScript | Not applicable                                                  | NPM, Yarn
Kotlin                    | A JDK version that you have tested to build your project.      | Gradle, Maven
Perl                      | 5.x                                                             | Not applicable
PHP                       | Not applicable                                                  | Composer
Python                    | Not applicable                                                  | Pip, Pipenv, setuptools, virtualenv
React Native              | Not applicable                                                  | NPM, Yarn, Bower
Ruby on Rails             | Ruby 2.4 or greater                                             | Bundler
Scala                     | A JDK version that you have tested to build your project.      | Gradle, Maven, sbt

Under each supported language, the Veracode CLI commands and output examples demonstrate the packaging process when you run the veracode package command. You can use the auto-packager with various integrations, but the CLI output examples help you visualize the packaging process. All examples assume the location of the CLI executable is in your PATH. You might see different output in your environment.

.NET

Before you can run the auto-packager, you must meet the following requirements (a pre-flight sketch follows the example output below).

Your environment must have:
- A supported version of .NET.
- A PATH environment variable that points to the dotnet or msbuild command.

Your projects must:
- Contain at least one syntactically correct .csproj file.
- Compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output.

1. Recursively searches your repo for all .csproj submodules.
2. To publish an SDK-style project, runs the following command:
   dotnet publish -c Debug -p:UseAppHost=false -p:SatelliteResourceLanguages='en' -p:WasmEnableWebcil=false -p:BlazorEnableCompression=false
3. To publish a .NET Framework project, runs a command similar to the following:
   msbuild Project.csproj /p:TargetFrameworkVersion=v4.5.2 /p:WebPublishMethod="FileSystem" /p:PublishProvider=FileSystem /p:LastUsedBuildConfiguration=Debug /p:LastUsedPlatform=Any CPU /p:SiteUrlToLaunchAfterPublish=false /p:LaunchSiteAfterPublish=false /p:ExcludeApp_Data=true /p:PrecompileBeforePublish=true /p:DeleteExistingFiles=true /p:EnableUpdateable=false /p:DebugSymbols=true /p:WDPMergeOption="CreateSeparateAssembly" /p:UseFixedNames=true /p:UseMerge=false /p:DeployOnBuild=true
4. Filters out any test projects.
5. Packages the published project and saves the artifacts of your packaged project in the specified --output location.

veracode package --source path/to/project/bobs-used-bookstore-sample --output verascan --trust

Packager initiated...
Verifying source project language ...
Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Data'.
Publish successful.
Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Web'.
Publish successful.
Project Bookstore.Web zipped and saved to: path\to\verascan\veracode-auto-pack-Bookstore.Web-dotnet.zip
DotNet project Bookstore.Web JavaScript packaged to: path\to\verascan\veracode-auto-pack-Bookstore.Web-dotnet-js.zip
Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Cdk'.
Publish successful.
Project Bookstore.Cdk zipped and saved to: path\to\verascan\veracode-auto-pack-Bookstore.Cdk-dotnet.zip
Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Domain'.
Publish successful.
Successfully created 3 artifact(s).
Created DotNet artifacts for DotNetPackager project.
Total time taken to complete command: 11.656s
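Because the auto-packager expects a buildable solution, it can help to confirm the .NET requirements above before packaging. A minimal pre-flight sketch under those assumptions follows; it is not part of the Veracode CLI, and the solution path is a placeholder.

# Hypothetical pre-flight check for a .NET repo (paths are placeholders)
command -v dotnet || command -v msbuild || echo "dotnet/msbuild not found on PATH"
find . -name '*.csproj' | head       # at least one .csproj should exist
dotnet build path/to/Solution.sln    # the project should compile without errors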
Android

Before you can run the auto-packager, you must meet the following requirements. Your environment must have:
- The correct Java or Kotlin version for packaging the application.
- The correct Android SDK version for packaging the application.
- Other dependencies installed, based on the repository dependencies.

The auto-packager completes the following steps, as shown in the example command output.

1. To build a Gradle project, runs the command gradlew clean build -x test.
2. Copies the artifacts of your packaged project to the specified --output location.

veracode package --source path/to/project/sunflower --output verascan --trust

Packaging code for project sunflower. Please wait; this may take a while...
Verifying source project language ...
Copying Java artifacts for GradlePackager project.
Copied artifact: path/to/verascan/app-benchmark.apk.
Copied artifact: path/to/verascan/app-debug.apk.
Copied artifact: path/to/verascan/macrobenchmark-benchmark.apk.
Successfully created 3 artifact(s).
Created Java artifacts for GradlePackager project.
Total time taken to complete command: 1m35.117s

COBOL

Before you can run the auto-packager, you must meet the following requirements:
- Your COBOL programs must be in UTF-8 encoded files with one of the following extensions: .cob, .cbl, .cobol, or .pco.
- Your COBOL copybooks must be in UTF-8 encoded .cpy files. Veracode recommends you include all copybooks to generate the best scan results.

The auto-packager completes the following steps, as shown in the example command output.

1. Finds all the files matching the required extensions and packages them in a ZIP archive (artifact).
2. Copies the artifacts of your packaged project to the specified --output location.

veracode package --source path/to/project/EnterpriseCOBOLv6.3 --output verascan --trust

Packaging code for project EnterpriseCOBOLv6.3. Please wait; this may take a while...
Verifying source project language ...
[GenericPackagerCobol] Packaging succeeded for the path path/to/project/EnterpriseCOBOLv6.3
Successfully created 1 artifact(s).
Created Cobol artifacts for GenericPackagerCobol project.
Total time taken to complete command: 3.802s

C/C++ Linux

Before you can run the auto-packager, you must meet the following requirements (a debug-info check sketch follows the example output below):
- All project files and libraries have been compiled with the debug information defined in the packaging guidelines.
- Auto-packaging must run on a supported Linux OS architecture and distribution.
- For efficient packaging, all binaries and libraries have been collected in a single folder.

The auto-packager completes the following steps, as shown in the example command output.

1. Detects a Veracode-supported Linux OS architecture. If it does not detect a supported architecture, the auto-packager throws an error and exits packaging.
2. Detects a Veracode-supported Linux OS distribution.
3. Searches the prebuilt binary directory to find scan-supported binary files, then archives them in a single artifact.

veracode package --source path/to/project/CppProjectLibsAndExecutables --output verascan --trust

Packaging code for project CppProjectLibsAndExecutables. Please wait; this may take a while...
Verifying source project language ...
C/CPP project CppProjectLibsAndExecutables packaged to: /path/to/verascan/veracode-auto-pack-CppProjectLibsAndExecutables-c_cpp.zip
Successfully created 1 artifact(s).
Created CPlusPlus artifacts for GenericPackagerCPP project.
Total time taken to complete command: 37.257s
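To confirm that a prebuilt Linux binary actually carries debug information before you package it, a quick check along these lines can help. This is a sketch, not a Veracode tool; it assumes ELF binaries and binutils, and the binary path is a placeholder.

# Hypothetical debug-info check for a prebuilt ELF binary (path is a placeholder)
readelf -S path/to/build/mybinary | grep -q '\.debug_info' \
  && echo "debug info present" \
  || echo "no .debug_info section; rebuild with -g"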
C/C++ Windows

Before you can run the auto-packager, you must meet the following requirements:
- The project must contain at least one .sln file that is configured to build at least one supported C++ project. A supported C++ project is defined by a .vcxproj file where all of the following are true:
  - It defines a supported project configuration: it targets a supported platform (x64 or Win32) and builds a supported binary (ConfigurationType is Application or DynamicLibrary).
  - It is not a test project (Native Unit Test project or Google Unit Test project).
- The msbuild command is available in the environment.
- The code can compile without errors.

The auto-packager completes the following steps, as shown in the example command output.

1. Searches the project directories to find supported .sln files. The search stops at each directory level where it finds supported files.
2. For each .sln file found, determines the solution configuration to use to build the top-level projects. If available, it uses the first solution configuration listed in the solution that has a supported project platform for a top-level C++ project, configured as a debug build.
3. Determines the supported top-level C++ projects for that solution configuration. A top-level C++ project is a C++ project that is not a dependency of any other project configured to build for that solution configuration.
4. Builds each supported top-level C++ project using the compiler and linker settings required for Veracode to analyze Windows C/C++ applications:

<ItemDefinitionGroup>
  <ClCompile>
    <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
    <Optimization>Disabled</Optimization>
    <BasicRuntimeChecks>Default</BasicRuntimeChecks>
    <BufferSecurityCheck>false</BufferSecurityCheck>
  </ClCompile>
  <Link>
    <LinkIncremental>false</LinkIncremental>
    <GenerateDebugInformation>true</GenerateDebugInformation>
    <ProgramDatabaseFile>$(OutDir)$(TargetName).pdb</ProgramDatabaseFile>
  </Link>
</ItemDefinitionGroup>

5. Creates an archive for each solution named veracode-auto-pack-$(SolutionName)-msvc.zip. Each archive contains a $(ProjectName) directory with all .exe, .dll, and .pdb build artifacts for each top-level project build target of the solution.

veracode package --source path/to/project/example-cpp-windows --output verascan --trust

Packaging code for project example-cpp-windows. Please wait; this may take a while...
Verifying source project language ...
Packaging Windows C/C++ artifacts for WinCppPackager publish path 'C:\Users\...\AppData\Local\Temp\2766238912731991934'.
MSBuild commands successfully completed.
Windows solution WS_AllSource packaged to: path\to\verascan\veracode-auto-pack-WS_AllSource-msvc.zip
Packaging Windows C/C++ artifacts for WinCppPackager publish path 'C:\Users\...\AppData\Local\Temp\7662002083651398436'.
MSBuild commands successfully completed.
Windows solution allPepPCIF packaged to: path\to\verascan\veracode-auto-pack-allPepPCIF-msvc.zip
Successfully created 2 artifact(s).
Created Windows C/C++ artifacts for WinCppPackager project.
Total time taken to complete command: 3m38.473s
Dart and Flutter

Before you can run the auto-packager, you must meet the following requirements:
- To ensure that Flutter installs successfully and validates all platform tools, successfully run flutter doctor.
- To generate an iOS archive file, the project must be able to run the command flutter build ipa --debug.
- To generate an Android APK file, the project must be able to run the command flutter build apk --debug.

The auto-packager completes the following steps, as shown in the example command output.

1. Gathers APK and IPA files.
2. Copies the artifacts of your packaged project to the specified --output location.

veracode package --source path/to/project/flutter-wonderous-app --output verascan --trust

Packaging code for project flutter-wonderous-app. Please wait; this may take a while...
Verifying source project language ...
Copying artifacts for Dart Flutter for FlutterPackager project.
Copied artifact: path/to/verascan/app-debug.apk.
Successfully created 1 artifact(s).
Created Dart artifacts for FlutterPackager project.
Total time taken to complete command: 54.731s

Go

Before you can run the auto-packager, you must meet the following requirements:
- Your environment must have a supported version of Go.
- Your projects must support Go Modules, contain a go.sum file and a go.mod file, and compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output.

1. To build and package a project, including the source code and the vendor folder, runs the command go mod vendor.
2. Copies the artifacts of your packaged project to the specified --output location.

veracode package --source path/to/project/sftpgo --output verascan --trust

Please ensure your project builds successfully without any errors.
Packaging code for project sftpgo. Please wait; this may take a while...
Verifying source project language ...
Packaging GO artifacts for GoModulesPackager project 'sftpgo'.
go mod vendor successful.
Go project sftpgo packaged to: path/to/verascan/veracode-auto-pack-sftpgo-go.zip
Successfully created 1 artifact(s).
Created GoLang artifacts for GoModulesPackager project.
Total time taken to complete command: 15.776s

iOS

Before you can run the auto-packager, you must meet the following requirements.

Your environment must have:
- Xcode and the xcodebuild command-line tool installed.
- gen-ir installed. For example:
  # Add the brew tap to your local machine
  brew tap veracode/tap
  # Install the tool
  brew install gen-ir
- pod installed, if your projects use CocoaPods or third-party tools.

Your projects must compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output.

1. Checks that the podfile or podfile.lock files are present.
2. Runs the command pod install.
3. Checks that the .xcworkspace or .xcodeproj files are present.
4. To build and package the project, runs:
   xcodebuild clean archive -PROJECT/WORKSPACE filePath -scheme SRCCLR_IOS_SCHEME -destination SRCCLR_IOS_DESTINATION -configuration SRCCLR_IOS_CONFIGURATION -archivePath projectName.xcarchive DEBUG_INFORMATION_FORMAT=dwarf-with-dsym ENABLE_BITCODE=NO
   The SRCCLR values are optional environment variables you can use to customize the xcodebuild archive command (see the sketch after this example).
5. Runs gen-ir on the artifact of your packaged project and the log files.
6. Saves the artifact in the specified --output location.

veracode package --source https://github.com/signalapp/Signal-iOS --type repo --output verascan --trust

Packager initiated...
Verifying source project language ...
Packaging iOS artifacts for IOSPackager project 'MyProject'.
iOS Project MyProject zipped and saved to: path/to/verascan/veracode-auto-pack-MyProject-ios-xcarchive.zip
Successfully created 1 artifact(s).
Created IOS artifacts for IOSPackager project.
Total time taken to complete command: 9.001s
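If the default scheme, destination, or configuration is not what you want, the optional SRCCLR variables named above can be exported before packaging. A sketch follows; the values shown are placeholders for an assumed project, not documented defaults.

# Hypothetical customization of the iOS archive step (values are placeholders)
export SRCCLR_IOS_SCHEME="MyApp"
export SRCCLR_IOS_DESTINATION="generic/platform=iOS"
export SRCCLR_IOS_CONFIGURATION="Debug"
veracode package --source path/to/project/MyApp --output verascan --trust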
Java with Gradle

Before you can run the auto-packager, you must meet the following requirements.

Your environment must have:
- A JDK version that you tested to successfully compile your application.
- Access to a gradlew command that points to the correct JAVA_HOME directory. If gradlew is not available, ensure the correct Gradle version is installed.

Your projects must:
- Have the correct build.gradle file.
- Compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output.

1. To build the Gradle project and package it as a JAR file, runs the command gradlew clean build -x test.
2. Copies the artifact of your packaged project to the specified --output location.

veracode package --source path/to/project/example-java-gradle --output verascan --trust

Packager initiated...
Verifying source project language ...
Copying Java artifacts for GradlePackager project.
Copied artifact: path/to/verascan/example-java-gradle-1.0-SNAPSHOT.jar.
Successfully created 1 artifact(s).
Created Java artifacts for GradlePackager project.
Total time taken to complete command: 7.174s

Java with Maven

Before you can run the auto-packager, you must meet the following requirements.

Your environment must have:
- A JDK version that you tested to successfully compile your application.
- Access to a mvn command that points to the correct JAVA_HOME directory.

Your projects must:
- Have the correct pom.xml file.
- Compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output.

1. To build and package the Maven project, runs the command mvn clean package.
2. Copies the artifact, such as JAR, WAR, or EAR, of your packaged project to the specified --output location.

veracode package --source path/to/project/example-java-maven --output verascan --trust

Packager initiated...
Verifying source project language ...
Copying Java artifacts for Maven project.
Copied artifact: path/to/verascan/example-java-maven-1.0-SNAPSHOT.jar.
Successfully created 1 artifact(s).
Created Java artifacts for Maven project.
Total time taken to complete command: 6.799s

JavaScript and TypeScript

Before you can run the auto-packager, you must meet the following requirements (see the sketch after the example output below).

Your environment must have:
- The NPM or Yarn package manager installed.
- The correct Node, NPM, or Yarn version to package the project.

Your projects must:
- Be able to resolve all dependencies with the commands npm install or yarn install.
- Have the correct package.json file.
- Compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output.

1. To build and package the project, runs one of the following commands: for NPM, npm install; for Yarn, yarn install.
2. Copies the artifact of your packaged project to the specified --output location.

veracode package --source path/to/project/example-javascript --output verascan --trust

Packager initiated...
Verifying source project language ...
Packaging Javascript artifacts for NPM project.
Project example-javascript packaged to path/to/verascan/veracode-auto-pack-example-javascript-js.zip.
Successfully created 1 artifact(s).
Created Javascript artifacts for NPM project.
Total time taken to complete command: 3.296s
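Because the auto-packager resolves JavaScript dependencies the same way you do locally, it can be worth confirming a clean install succeeds before packaging. A minimal sketch, assuming an npm-based project in the current directory:

# Hypothetical pre-flight for a JavaScript project (use yarn install for Yarn projects)
npm install
veracode package --source . --output verascan --trust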
Kotlin

Before you can run the auto-packager, you must meet the following requirements.

Your environment must have:
- The correct Kotlin version for your projects.
- The Maven or Gradle package manager installed.
- A Java version that your package manager requires.

Your projects must:
- Have the correct pom.xml, build.gradle, or build.gradle.kts file.
- Compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output.

1. Verifies that your project language is supported.
2. Uses Gradle to build and package the project.
3. Copies the artifacts of your packaged project to the specified --output location.

veracode package --source path/to/project/kotlin-server-side-sample/gradle --output verascan --trust

Packager initiated...
Verifying source project language ...
Copying Java artifacts for GradlePackager project.
Copied artifact: path/to/verascan/demo-0.0.1-SNAPSHOT-plain.jar.
Copied artifact: path/to/verascan/demo-0.0.1-SNAPSHOT.jar.
Successfully created 2 artifact(s).
Created Java artifacts for GradlePackager project.
Total time taken to complete command: 8.632s

Perl

Before you can run the auto-packager, you must meet the following requirements:
- Your Perl project must be version 5.x.
- Your project must contain at least one file with one of the following extensions: .pl, .pm, .plx, .pl5, or .cgi.

The auto-packager completes the following steps, as shown in the example command output.

1. Finds all the files matching the required extensions and packages them in a ZIP archive (artifact).
2. Copies the artifacts of your packaged project to the specified --output location.

veracode package --source path/to/project/bugzilla --output verascan --trust

Packaging code for project bugzilla. Please wait; this may take a while...
Verifying source project language ...
[GenericPackagerPerl] Packaging succeeded for the path path/to/project/bugzilla.
Successfully created 1 artifact(s).
Created Perl artifacts for GenericPackagerPerl project.
Total time taken to complete command: 9.965s

PHP

Before you can run the auto-packager, you must meet the following requirements.

Your environment must have:
- The correct PHP version for your projects.
- The Composer dependency manager installed.

Your projects must:
- Have the correct PHP composer.json file.
- Compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output.

1. To build and package the project source code and lock file with Composer, runs the command composer install.
2. Saves the artifacts of your packaged project in the specified --output location.

veracode package --source path/to/project/example-php --output verascan --trust

Packager initiated...
Validating output path ...
Packaging PHP artifacts for Composer project.
Project captainhook zipped and saved to path/to/verascan/veracode-auto-pack-captainhook-php.zip.
Packaging PHP artifacts for Composer project.
Project template-integration zipped and saved to path/to/verascan/veracode-auto-pack-template-integration-php.zip.
Successfully created 2 artifact(s).
Created PHP artifacts for Composer project.
Total time taken to complete command: 3.62s
Python

Before you can run the auto-packager, you must meet the following requirements.

Your environment must have:
- The correct pip and Python or pyenv version for packaging your project installed.
- A package manager configuration file with the required settings to resolve all dependencies.

Your projects must compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output.

1. To resolve all third-party dependencies and generate the lock file, runs the pip install command pip install -r requirements.txt.
2. Packages the project source code, lock file, and vendor folder.
3. Saves the artifact of your packaged project to the specified --output location.

veracode package --source path/to/project/example-python --output verascan --trust

Packager initiated...
Verifying source project language ...
Packaging Python artifacts for PIP project.
Project example-python zipped and saved to path/to/verascan/veracode-auto-pack-example-python-python.zip.
Successfully created 1 artifact(s).
Created Python artifacts for PIP project.
Total time taken to complete command: 14.359s

React Native

Before you can run the auto-packager, you must meet the following requirements. Your environment must have:
- The correct version of Node, NPM, or Yarn for your projects.
- An NPM or Yarn installation that resolves all dependencies.
- The correct package.json file, with the React Native version listed as a dependency.

The auto-packager completes the following steps, as shown in the example command output.

1. For NPM applications, runs the npm install command.
2. For Yarn applications, runs the yarn install command.
3. For Expo builds, runs the expo start command.

veracode package --source path/to/project/example-javascript-yarn --output verascan --trust

Packaging code for project example-javascript-yarn. Please wait; this may take a while...
Verifying source project language ...
Packaging Javascript artifacts for Yarn project.
JavaScript project example-javascript-yarn packaged to: path/to/verascan/veracode-auto-pack-example-javascript-yarn-js.zip
Successfully created 1 artifact(s).
Created Javascript artifacts for Yarn project.
Total time taken to complete command: 1m9.13s

Ruby on Rails

Before you can run the auto-packager, you must meet the following requirements.

Your environment must have:
- The Bundler package manager installed with the correct Ruby version.
- The Veracode packager gem installed. This gem handles pre-processing of Rails projects for Static Analysis.
- The ability to run the command bundle install.

Your projects must compile successfully without errors. Optionally, to test your configured environment, run the command rails server.

The auto-packager completes the following steps, as shown in the example command output (the equivalent commands are collected in the sketch after this section).

1. To configure the vendor path, runs the command bundle config --local path vendor.
2. Runs the command bundle install without development and test: bundle install --without development test.
3. To check for the Rails installation, runs the command bundle info rails. If Rails is not installed, the auto-packager assumes it is not a Rails project and exits.
4. To install the Veracode packager gem, runs the command bundle add veracode.
5. To package your project using the Veracode packager gem, runs the command bundle exec veracode.
6. Saves the artifact of your packaged project to the specified --output location.

veracode package --source path/to/project/rails --output verascan --trust

Packager initialized...
Verifying source project language ...
Packaging Ruby artifacts for RubyPackager project 'veracode-rails-20240321225855.zip'.
ArtifactPath: /rails/tmp/veracode-rails-20240321225855.zip
ValidatedSource: /rails
ValidatedOutput: /rails/verascan
Project name: rails
44824469 bytes written to destination file. Path: /rails/verascan/rails.zip
temporary zip file deleted. Path: /rails/tmp/veracode-rails-20240321225855.zip
Successfully created 1 artifact(s).
Created Ruby artifacts for RubyPackager project.
Total time taken to complete command: 1m27.428s
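For reference, the Bundler steps listed above can be reproduced manually when a Rails project fails to package. This sketch simply strings together the commands named in those steps and assumes it is run from the project root.

# Manual equivalent of the Ruby packaging steps described above (run from the project root)
bundle config --local path vendor
bundle install --without development test
bundle info rails       # if this fails, the project is not treated as a Rails project
bundle add veracode
bundle exec veracode    # the example output above shows the archive written under tmp/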
Scala

Before you can run the auto-packager, you must meet the following requirements.

Your environment must have:
- A JDK version that you have tested to successfully package your application.
- The Maven, Gradle, or sbt package manager installed with the correct Java version.

Your projects must:
- Have the correct pom.xml, build.gradle, or build.sbt file.
- Compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output (a sketch of the same fallback logic follows the output).

1. Runs the sbt assembly command sbt clean assembly. This command assists in creating a JAR file with dependencies in non-Spring projects, which improves SCA scanning.
2. If sbt assembly fails, runs the sbt package command sbt clean package.
3. Copies the artifacts of your packaged application to the specified --output location.

veracode package --source path/to/project/packSample/zio-quill --output verascan --trust

Packager initiated...
Verifying source project language ...
Copying Java artifacts for SbtPackager project.
Copied artifact: path/to/verascan/quill-cassandra_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-cassandra-monix_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-cassandra-pekko_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-cassandra-zio_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-codegen_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-codegen-jdbc_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-codegen-tests_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-core_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-doobie_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-engine_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-monix_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-h2_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-mysql_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-oracle_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-postgres_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-sqlite_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-sqlserver_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-zio_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-monix_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-orientdb_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-spark_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-sql_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-sql-test_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-util_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-zio_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/zio-quill-docs_2.12-4.8.2+3-d2965801-SNAPSHOT.jar.
Successfully created 28 artifact(s).
Created Java artifacts for SbtPackager project.
Total time taken to complete command: 45.428s
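The assembly-then-package fallback that the auto-packager applies to sbt projects can also be reproduced by hand when debugging a build. A minimal sketch, assuming sbt (and the sbt-assembly plugin, where used) is set up for the project:

# Hypothetical manual equivalent of the Scala packaging fallback
sbt clean assembly || sbt clean package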
Record 2

identifier: 8585
dataset: dbpedia
question: 2
rank: 89
url: https://scriptingosx.com/category/deployment/page/4/
language: en
title: Deployment – Page 4 – Scripting OS X
top_image: https://i0.wp.com/script…=512%2C512&ssl=1
meta_img: https://i0.wp.com/script…=512%2C512&ssl=1
images: [ "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2019/11/cropped-NewShebang-1.png?fit=248%2C248&ssl=1", "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2020/02/WranglingPython-Perseus.jpg?resize=800%2C510&ssl=1", "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2020/02/WranglingPython-InstallDevToolsDialog.png?w=660&ssl=1", "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2018/05/macOSInstallationBanner.jpeg?resize=825%2C510&ssl=1", "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2019/10/Gatekeeper_logo.png?resize=201%2C207&ssl=1", "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2019/06/volumereplication2.png?resize=632%2C510&ssl=1", "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2019/06/volumereplication.png?resize=660%2C351&ssl=1", "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2019/03/MacADUK2019.png?resize=825%2C510&ssl=1" ]
movies: []
keywords: []
meta_keywords: [ "" ]
tags: null
authors: []
publish_date: 2020-05-13T14:31:00+02:00
meta_description: Posts about Deployment written by ab
meta_lang: en
meta_favicon: https://i0.wp.com/script…it=32%2C32&ssl=1
meta_site_name: Scripting OS X
canonical_link: https://scriptingosx.com/category/deployment/
text:
As a System Engineer at an Enterprise Reseller, I have to manage and create many Jamf Pro instances. Some of them are tightly managed and require version control on the OS and the apps. But many of them are managed less stringently, and often the requirement for applications is "install the latest version."

This is not a statement about which management strategy is 'better.' There are pros and cons for each. There are situations where either is really not appropriate. You will likely have to use a mixed approach for different pieces of software. When you are doing the first, more controlled deployment strategy, you really want to use AutoPkg and not this script. You can stop reading here.

Apple's vision of deployment with 'Automated App Installation' through MDM (formerly known as VPP) is similar to the 'less controlled' strategy. When you install Mac App Store apps through MDM commands, you will get the latest version available. Not all applications are available on the Mac App Store. And even when they are available, installing applications with VPP is still unreliable and hard to debug, or retry when it fails.

If you are managing with the "just install the latest version" philosophy, then you probably have one or more scripts that will download and install the latest version of some software from the vendor's website. This avoids the overhead work of having to download, repackage, and manage every new update in the management system. (This can be automated with AutoPkg, but if you can avoid it entirely…)

When I started thinking about this, we had at least four different scripts. Most of them were internal, but William Smith's installer script for Microsoft applications was a huge inspiration. It made me think that you could generalize much of this.

Security Considerations

The main danger when downloading application archives and installers directly from the vendor is that a malicious actor might intercept the traffic or even hijack the servers and replace the download with manipulated software that contains and/or installs malware. Since management processes run with root privileges, we have to be extra careful which files and processes are installed.

For user-driven installation, Apple introduced GateKeeper, signed applications, and Notarization as a way to verify downloaded software before execution. When you download software with a script, you are bypassing GateKeeper. This is usually considered a benefit, because in a managed deployment we don't want to scare and annoy a user with the warning dialogs. But we can use the GateKeeper verification process in our script to verify that the archive, application, or installer is signed and notarized. With the spctl command, we can run the verification from the script without the user interaction.

We can even go one step further than GateKeeper. GateKeeper is happy when a software is signed and notarized with any Apple Developer ID. Since this script is working with a curated list of software, we can verify that the application is actually signed with the expected vendor's Developer ID. This will catch situations where someone creates or steals a Developer ID to sign and notarize a manipulated application. Apple can and will eventually block that Developer ID, but there will be a window where the manipulated application may be downloaded and installed. This is not theoretical, but has happened already. (More than once.)
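The checks described here can be done with stock macOS tools. The following is a minimal sketch of that idea, not the actual Installomator code; the app path and Team ID are placeholders.

# Hypothetical verification sketch: GateKeeper assessment plus Developer ID team check
app="/Applications/Example.app"      # placeholder path to the downloaded app
expected_team="ABCDE12345"           # placeholder vendor Team ID

spctl --assess --verbose --type execute "$app" || exit 1
actual_team=$(codesign --display --verbose=2 "$app" 2>&1 | awk -F= '/^TeamIdentifier/ {print $2}')
[ "$actual_team" = "$expected_team" ] || { echo "unexpected Team ID: $actual_team"; exit 1; }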
Installomator

With these ideas in mind, I started working on a script to unify all these installer scripts. ('The one to rule them all.') I may have gone a little overboard, but it turned into Installomator.

You can run Installomator from the command line or from your management system:

> ./Installomator.sh desktoppr

The script requires a single argument. The argument is a label that chooses which application to download and install. (As of now, Installomator can handle 56 applications; you can see a list of applications in the repository.) Please read the readme in the GitHub repository for more details.

Jamf or not

I have tried to keep Installomator generic enough that it can be used with platforms other than Jamf Pro. However, we will use it with Jamf Pro, and thus I took the opportunity to add some workflows that Jamf is missing.

Drag'n Drop installations

"Drag this app to the Applications folder" is a common instruction found on downloaded dmg or zip archives for the Mac. The fact that Jamf Pro has always required repackaging and cannot directly handle application dmgs or zips is mystifying. Also, highly ironic, since Jamf delivers their own management applications in a disk image. Nevertheless, Installomator can deal with apps that are downloaded in zip, tbz, and dmg archives.

Blocking Processes

Jamf will also happily attempt to install over a running application. So, Installomator will check for blocking processes and either stop the installation at that time or prompt the user and give them a chance to quit the application. (Yes, this is inspired by the behavior of Munki.)

Vendor update processes

Since Installomator will download and install the latest version of the application from the vendor website, it can be used for updates as well as first installations. If an application has a built-in update process that can be triggered by the script, this can be used instead for updates. So, for Microsoft applications, when the script detects that the app is already installed, it will run msupdate instead of downloading a full installer. This way the update process will use Microsoft's optimized thin updates. (Credit to Isaac Ordonez, Mann consulting, for the idea and first implementation.) So far, this is only implemented for Microsoft applications and Google Chrome. (And quite experimental.)

Extensible

So far, the script can install 56 different applications or application suites. More application descriptions can be added fairly easily, by adding the proper variables (see the sketch below for the general shape). You can find more detailed explanations in the ReadMe, and of course, the existing applications serve as examples. Not all applications are suitable to be installed with Installomator. To be able to install an application, the download URL must be accessible without requiring a login, and there must be some fairly simple process to determine the URL for the latest version. Installomator will only install the application itself; it will not configure any settings. You will have to use profiles, or additional scripts and installers for that. When you add an application for your own workflow, please contribute as an issue or pull request! Thank you!
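To illustrate what "adding the proper variables" might look like, here is a rough, hypothetical shape of an application description. The variable names and values are illustrative assumptions, not taken from the repository; check the readme and the existing labels for the real format.

# Hypothetical application label sketch (variable names and values are assumptions)
label="examplelabel"
case "$label" in
    examplelabel)
        name="Example App"                                  # display name (assumed variable)
        type="dmg"                                          # archive type (assumed variable)
        downloadURL="https://example.com/ExampleApp.dmg"    # placeholder download URL
        expectedTeamID="ABCDE12345"                         # placeholder Developer ID Team ID
        ;;
esac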
Installomator and AutoPkg

Obviously, much of Installomator's workflow has been heavily inspired by AutoPkg. I have been using AutoPkg for a long time and provide a repository of recipes. And I plan to continue to use AutoPkg. As mentioned before, Installomator is not suitable for every type of deployment. If you require control over the versions of the software deployed, then you need to download, re-package, and manage the packages in your management system. This is obviously what AutoPkg was designed for. Also, not every software can be installed with Installomator, mostly because the installer is not available as a direct download. In these cases, AutoPkg will be useful to automate the management and deployment, even when your management style is less controlling.

Going Forward

We have been using Installomator for the past few weeks in our own deployment and with one customer. We are now at a point where we believe it is stable enough to share it and get feedback from other MacAdmins. (I have already shared it with a few, and many thanks to those that have given valuable feedback.) We have been using this script with two smaller deployments and want to roll it out to more of our customers. But we probably haven't hit all the weird edge cases yet. So, proceed with caution. Consider this a beta release. (Sidenote: I have tested the script with 10.14.6 and 10.15.x. Because it uses the notarization verification, which is available in 10.14.4 and higher, it will probably not run well on older macOS versions. Might be possible to adapt it though.)

If you are as excited about the script as we are, please start testing in your environment and provide feedback. But please, as with anything MacAdmin, don't just go and push the script to hundreds or thousands of devices; test, test, test first. Then please provide any enhancements back on the GitHub repository. I have also created an #installomator channel on the MacAdmins Slack for discussion and questions.

As I noted in my last Weekly News Summary, several open source projects for MacAdmins have completed their transition to Python 3. AutoPkg, JSSImport, and outset announced Python 3 compatible versions last week, and Munki already had the first Python 3 version last December.

Why?

Apple has included a version of Python 2 with Mac OS X since 10.2 (Jaguar). Python 3.0 was released in 2008 and it was not fully backwards compatible with Python 2. For this reason, Python 2 was maintained and updated alongside Python 3 for a long time. Python 2 was finally sunset on January 1, 2020. Nevertheless, presumably because of the compatibility issues, Apple has always pre-installed Python 2 with macOS and still does so in macOS 10.15 Catalina. With the announcement of Catalina, Apple also announced that in a "future version of macOS" there will be no pre-installed Python of any version.

"Scripting language runtimes such as Python, Ruby, and Perl are included in macOS for compatibility with legacy software. Future versions of macOS won't include scripting language runtimes by default, and might require you to install additional packages. If your software depends on scripting languages, it's recommended that you bundle the runtime within the app." (macOS 10.15 Catalina Release Notes)

This also applies to Perl and Ruby runtimes and other libraries. I will be focussing on Python because it is used more commonly for MacAdmin tools, but most of this post will apply equally to Perl and Ruby. Just mentally replace "Python" with your preferred language. The final recommendation is what AutoPkg and Munki are following: they are bundling their own Python runtime.

How to get Python

There is a second bullet in the Catalina release notes, though:

"Use of Python 2.7 isn't recommended as this version is included in macOS for compatibility with legacy software. Future versions of macOS won't include Python 2.7. Instead, it's recommended that you run python3 from within Terminal. (51097165)"

This is great, right?
Apple says there is a built-in Python 3! And it's pre-installed? Just move all your scripts to Python 3 and you'll be fine! Unfortunately, not quite. The python3 binary does exist on a 'clean' macOS, but it is only a stub tool that will prompt a user to download and install the Command Line Developer Tools (aka "Developer Command Line Tools" or "Command Line Tools for Xcode"). This is common for many tools that Apple considers to be of little interest to 'normal,' non-developer users. Another common example is git.

When you install Xcode, you will also get all the Command Line Developer Tools, including python3 and git. This is useful for developers, who may want to use Python scripts for build operations, or for individuals who just want to 'play around' or experiment with Python locally. For MacAdmins, it adds the extra burden of installing and maintaining either the Command Line Developer Tools or the full Xcode install.

Python Versions, a multitude of Snakes

After installing Xcode or the Command Line Developer Tools, you can check the version of Python installed (versions on macOS 10.15.3 with Xcode 11.3.1):

> python --version
Python 2.7.16
> python3 --version
Python 3.7.3

When you go on the download page for Python.org, you will get Python 3.8.1 (as of this writing). But on that download page, you will also find download links for "specific versions" which include (as of this writing) versions 3.8.1, 3.7.6, 3.6.10, 3.5.9, and the deprecated 2.7.17.

The thing is that Python isn't merely split into two major release versions, which aren't fully compatible with each other, but there are several minor versions of Python 3, which aren't fully compatible with each other, but are still being maintained in parallel. Developers (individuals, teams, and organisations) that use Python will often hold on to a specific minor (and sometimes even patch) version for a project to avoid issues and bugs that might appear when changing the run-time.

When you install the latest version of Munki, it will install a copy of the Python framework in /usr/local/munki/ and create a symbolic link to that python binary at /usr/local/munki/python. You can check its version as well:

% /usr/local/munki/python --version
Python 3.7.4

All the Python code files for Munki will have a shebang (the first line in the code file) of

#!/usr/local/munki/python

This ensures that Munki code files use this particular instance of Python and no other copy of Python that may have been installed on the system. The latest version of AutoPkg has a similar approach:

> /usr/local/autopkg/python --version
Python 3.7.5

In both cases the python binary is a symbolic link. This allows the developer to change the symbolic link to point to a different Python framework. The shebangs in all the code files point to the symbolic link, which can be changed to point to a different Python framework. This is useful for testing and debugging. Could MacAdmins use this to point both tools to the same Python framework? Should they?

The Bridge to macOS

On top of all these different versions of Python itself, many scripts, apps, and tools written in Python rely on 'Python modules.' These are libraries (or frameworks) of code for a certain task, that can be downloaded and included with a Python installation to extend the functionality of Python. The most relevant of these modules for MacAdmins is the "Python Objective-C Bridge." This module allows Python code to access and use the native macOS Cocoa and CoreFoundation frameworks.
This not only allows macOS-native GUI applications to be written in Python (e.g. AutoDMG and Munki's Managed Software Center [update: MSC was re-written in Swift last year]), but also allows short scripts to access system functions. This is sometimes necessary to get data that matches what macOS applications "see" rather than what the raw unix tools see.

For example, the defaults tool can be used to read the value of property lists on disk. But those might not necessarily reflect the actual preference value an application sees, because that value might be controlled by a different plist file or configuration profile. (Shameless self-promotion) Learn more about Property Lists, Preferences and Profiles.

You could build a tool with Swift or Objective-C that uses the proper frameworks to get the "real" preference value. Or you can use Python with the Objective-C bridge:

#!/usr/bin/python
from Foundation import CFPreferencesCopyAppValue
print CFPreferencesCopyAppValue("idleTime", "com.apple.screensaver")

Three simple lines of Python code. This will work with the pre-installed Python 2.7, because Apple also pre-installs the Python Objective-C bridge with that. When you try this with the Developer Tools python3 you get an error:

ModuleNotFoundError: No module named 'Foundation'

This is because the Developer Tools do not include the Objective-C bridge in the installation. You could easily add it with:

> sudo python3 -m pip install pyobjc

But again, while this command is "easy" enough for a single user on a single Mac, it is just the beginning of a Minoan labyrinth of management troubles. Developers and MacAdmins have to care about the version of the Python they install, as well as the list of modules and their versions, for each Python version. It is as if the Medusa head kept growing more smaller snakes for every snake you cut off. (Ok, I will ease off with the Greek mythology metaphors.)

You can get a list of modules included with the AutoPkg and the Munki project with:

> /usr/local/munki/python -m pip list
> /usr/local/autopkg/python -m pip list

You will see that not only do Munki and AutoPkg include different versions of Python, but also a different list of modules. While Munki and AutoPkg share many modules, their versions might still differ.
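As a quick way to see whether a given bundled runtime carries the Objective-C bridge, the same one-line preference check from above can be pointed at any of these interpreters. A sketch, assuming the Munki runtime mentioned earlier is installed; the preference domain and key are just the example used above.

# Hypothetical check: does this bundled Python 3 runtime include the Objective-C bridge?
/usr/local/munki/python -c 'from Foundation import CFPreferencesCopyAppValue; print(CFPreferencesCopyAppValue("idleTime", "com.apple.screensaver"))'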
Snake Herding Solutions

Apple's advice in the Catalina Release Notes is good advice: "It's recommended that you bundle the runtime within the app." Rather than the MacAdmin managing a single version of Python and all the modules for every possible solution, each tool or application should provide its own copy of Python and its required modules. If you want to build your own Python bundle installer, you can use this script from Greg Neagle.

This might seem wasteful. A full Python 3 framework uses about 80 MB of disk space, plus some extra for the modules. But it is the safest way to ensure that the tool or application gets the correct version of Python and all the modules. Anything else will quickly turn into a management nightmare. This is the approach that Munki and AutoPkg have chosen.

But what about smaller, single-script solutions? For example, simple Python scripts like quickpkg or prefs-tool? Should I bundle my own Python framework with quickpkg or prefs-tool? I think that would be overkill and I am not planning to do that. I think the solution that Joseph Chilcote chose for the outset tool is a better approach for less complex Python scripts. In this case, the project is written to run with Python 3 and generic enough to not require a specific version or extra modules. An admin who wants to use this script or tool can change the shebang (the first line in the script) to point to either the Developer Tools python3, the python3 from the standard Python 3 installer, or a custom Python version, such as the Munki python. A MacAdmin would have to ensure that the python binary in the shebang is present on the Mac when the tool runs.

You can also choose to provide your organization's own copy of Python, with your chosen set of modules, for all your management Python scripts and automations. You could build this with the relocatable Python tool and place it in a well-known location on the clients. When updates for the Python run-time or modules are required, you can build and push them with your management system. (Thanks to Nathaniel Strauss for pointing out this needed clarifying.) When you build such scripts and tools, it is important to document which Python versions (and module versions) you have tested the tool with. (I still have to do that for my Python tools.)

What about /usr/bin/env python?

The env command will determine the path to the python binary in the current environment (i.e. using the current PATH). This is useful when the script has to run in various environments where the location of the python binary is unknown, for example when developers want to use the same script across different computers, user accounts, and platforms. However, this renders the actual version of python that will interpret the script completely unpredictable. Not only is it impossible to predict which version of Python will interpret a script, but you cannot depend on any modules being installed (or their versions) either. For MacAdmin management scripts and tools, tighter control is necessary. You should use fixed, absolute paths in the shebang.

Conclusion

Managing Python runtimes might seem like a hopeless Sisyphean task. I believe Apple made the right choice to not pre-install Python any more. Whatever version and pre-selection of module versions Apple would have chosen, it would only have been the correct combination for a few Python solutions and developers. While it may seem wasteful to have a multitude of copies of the Python frameworks distributed throughout the system, it is the easiest and most manageable solution to ensure that each tool or application works with the expected combination of run-time and modules.

Apple has started shipping Mac models that used to come with Mojave with Catalina pre-installed instead. If your organization has blockers for Catalina (incompatible software, etc.), you may want to install Mojave on these Macs. Unfortunately, this is not so easy.

Important Notice: these instructions will only work for Mac models that can boot to Mojave. Usually a Mac requires at least the version of macOS that the model shipped with when it was introduced. As of this writing, all new Macs require at least Mojave. The exceptions are the iMac Pro (High Sierra) and the MacBook Pro 16" and the Mac Pro (2019), which both require Catalina. You cannot use these instructions to force a Mac Pro or MacBook Pro 16" to boot to Mojave. Any new Mac models that Apple introduces from now on will also require Catalina and cannot be downgraded to Mojave. (Not meant as a challenge. I am aware that someone might be able to hack together a Chimera Mojave with Catalina drivers. These 'solutions' are not supportable at scale.)
Directly downgrading from Catalina to Mojave with the startosinstall --eraseinstall command will fail. Attempts to run the Mojave installer from a Catalina Recovery (local or Internet) will also fail. The reason seems to be that the Mojave Installer application chokes on some aspect of Catalina APFS. Apple is likely not very motivated to fix this.

So far, the recommendation has been to boot to Internet Recovery with the shift-command-R key combination at boot. This used to boot to a Mojave recovery system (more specifically, the system the Mac shipped with), and then you could wipe and re-install Mojave. However, if a Mac shipped with Catalina pre-installed, it will boot to Catalina Internet Recovery, regardless of whether the Mac can boot to Mojave or not. We have to get creative.

External USB Installer

The solution requires a Mojave Installer USB disk. First download the latest Mojave installer. You can do so by following this App Store link. If you are running Catalina, you can also use the new option in softwareupdate:

> softwareupdate --fetch-full-installer --full-installer-version 10.14.6

Then you can use the createinstallmedia command in the Install macOS application to build an external Installer Drive on a USB drive. You probably want to add the --downloadassets option to add the current firmware to the USB drive as well.

> createinstallmedia --volume /Volumes/Untitled --downloadassets

This will delete the target volume data on the USB disk.

Enable External Boot

To boot a new Mac with a T2 chip off an external drive, you need to allow external boot from the Startup Security Utility in the Recovery partition. This utility is protected and requires the password of a local administrator user to access. When you get a new Mac “out of the box,” you cannot directly boot to Recovery to change this. Instead, you have to boot to the pre-installed Catalina, work your way through the Setup Assistant, and create a local administrator user before you can boot to Recovery to change this setting.

You also need to connect the Mac to a network with non-filtered/proxied access to Apple’s servers, either with Wi-Fi or an Ethernet adapter. You can see which services and servers the network needs to be able to access in this kbase article. You will definitely need the servers listed under ‘Device Setup’ from that list and many of the others, depending on your deployment workflow. This network connection is required to verify the integrity of the system on the USB Installer drive. You could also disable ‘Secure Boot’ entirely, but that is not recommended as it will, well, disable all system security verifications.

Now, reboot the Mac and hold the Option key. From the list of devices to boot from, select the Mojave Installer drive. Once booted to the Mojave installation drive, start Disk Utility. In Disk Utility, erase the entire internal drive. You may have to choose ‘Show All Devices’ from the View menu to be able to select the internal drive with all sub-volumes, not just the system or data volume. Then you can quit Disk Utility and start the Mojave installation process.

After completing the installation, you want to remember to return to Recovery and disable external boot again. However, you need to create a new admin account on the disk before you can do that…

Avoiding the Downgrade

This is obviously tedious and really hard to automate. (I have been wondering if you could build an MDS workflow, but this one would require at least three reboots.)
The preferred solution is for IT departments and organizations to have the workflows and infrastructure in place to support and use “latest macOS” (Catalina). Apple is discouraging system downgrades or using anything but “latest macOS.” On newer hardware, like the MacBook Pro 16″, the Mac Pro 2019, and every new Mac Apple will introduce from now on, downgrading to Mojave is not possible at all, so you have to support Catalina when you (or your users) get those Mac models.

As mentioned before, I do not believe there is much motivation at Apple to simplify this particular workflow. It serves Apple’s interest and vision to push the latest macOS over previous versions. From a user perspective it allows better integration with their iOS and other Apple devices. From a security standpoint it provides the latest security updates and patches. Apple provides security updates for the previous two macOS versions, but those notoriously do not fix all the vulnerabilities that the latest macOS gets.

However, in some cases you may have blocking applications that cannot run, or cannot be upgraded to run, on Catalina. Then this workflow can be a ‘last ditch’ solution until you get those ‘blockers’ sorted out. Maybe the best solution is to use this complex and work-intensive downgrade workflow as leverage to push for “latest macOS” support in your organization.

Thanks to Robin Lauren and Mike Lynn for figuring this out on MacAdmins Slack and sharing their results.

There is a new update to my book “macOS Installation!” It contains lots of updates regarding Catalina, and the usual round of typo fixes and other corrections. As usual, the update is free when you already own the book.

If you have already purchased the book, you can go to the Apple Books application on your Mac and choose ‘Check for available Downloads…’ from the ‘Store’ menu. I have seen the Books app on the Mac be really slow (or even completely blind) in picking up updates; you can accelerate the process by removing the local download and re-downloading the book. In iOS, tap on your iCloud account icon next to ‘Reading Now’ and then choose ‘Updates.’

If you have not yet purchased the book, I have good news for you: I have lowered the price!

Get “macOS Installation” on Apple Books!

Why did I lower the price? Let me explain…

This is the fourth update for “macOS Installation.” It might be its last. When I first published the book in June 2018, I promised updates until the Mojave release. There have now been two updates beyond that: one for the Mojave “Spring” update, and another one for Catalina.

The original intention of the book’s format was to help MacAdmins learn about and deal with the strange, new post-imaging world that came with High Sierra and the T2 security chip. I like to believe it did that quite well. But since then, the releases of Mojave and Catalina have added more layers of complexity and information on top of that. The post-imaging world isn’t new anymore. It is still strange, complicated, and sometimes hard to navigate. However, I feel that the book’s format would have to change to keep being a useful guide.

Obviously, such a re-structuring is a massive effort and would pretty much result in a new book. Maintaining and updating a book is a lot of effort, re-writing it even more so. Thus the decision that this might be the last update for “macOS Installation.” Depending on how disruptive the changes in the Catalina “spring” update will be, I might update for those, but I am not planning to update the book for 10.16 next year.
I might work on some new book on macOS deployment and management in the future. However, I have a few other topics I want to publish before I do that, so that might be a while. Charles Edge’s and Rich Trouton’s new book should be a great successor to “macOS Installation”:

Rich Trouton’s and Charles Edge’s “Apple Device Management: A Unified Theory of Managing Macs, iPads, iPhones, and AppleTVs”: pre-order on Amazon US, UK, DE (Affiliate Links)

“macOS Installation” should remain useful for the lifetime of Catalina, which, depending on your deployment practices, should be another one to four years, more if 10.16 and 10.17 do not drastically change everything again.

Readers who bought the book 16 months ago got several updates for free. I believe free updates are one of the great value propositions of self-published digital books. Most computer-related information changes quickly these days, and being able to update digital books is a great way to extend their lifetime, usefulness, and value. My plan to not further update “macOS Installation” thus lowers its value a bit, and to reflect that I am lowering its price in the store.

That said, I am convinced the book is still very helpful and full of useful information as it is, so if you have not bought the book yet, this is your chance! Get “macOS Installation” on Apple Books!

Changes in this version (you can also find this in the book in the ‘Version History’ section):

added “Moving to zsh” to More Books and updated links to the new Apple Books format
extended the explanation on FileVault and the Secure Token
added Catalina System Volume Layout description
added instructions to block the macOS Catalina download
added an explanation for the expiring installer certificates from October 2019
updated download links for Older macOS Versions
added notes to NetBoot-based Installation regarding its further demise and the removal of System Image Utility from Catalina
added information on new softwareupdate features in Catalina to macOS Installer Application
added a section on new Catalina features
added a description of new stub Installer application behavior with startosinstall
added link to new SecureToken documentation
updated text and tables to reflect the 2019 iMacs
clarified reboot behavior of Mojave and High Sierra with Custom Packages
added a list of MDM commands that require DEP
now using the term ‘conventional’ Macs to refer to non-Secure Boot or pre-T2 Macs
many typos, minor changes and clarifications

Apple introduced Notarization in macOS Mojave. Since its introduction Apple has kept increasing the use of notarization checks in macOS. For macOS Catalina, Apple has been very vocal saying that Notarization is a requirement for distribution of applications outside of the Mac App Store. This has left many MacAdmins confused and concerned.

A large part of the work as a MacAdmin consists of (re-)packaging applications, configuration files and scripts so they can be distributed in an automated fashion through a management system, such as Jamf Pro, Munki, Fleetsmith, etc. Do MacAdmins need to notarize all the package installers they create as well? Do MacAdmins need to obtain an Apple Developer ID? How should MacAdmins deal with notarized and non-notarized applications and installers from third parties?

This post is an attempt to clarify these topics.
It’s complicated and long, bear with me…

Signed Applications

Apple’s operating systems use cryptographic signatures to verify the integrity and source of applications, plug-ins, extensions, and other binaries.

When an application, plug-in, extension, or other binary (from now on: “software”) is signed with a valid Apple Developer certificate, macOS (or iOS, tvOS, and watchOS) can verify that the software has not been changed or otherwise tampered with since it was signed. The signature also verifies the source of the software, i.e. the individual Developer account or Developer team whose Developer identity was used to sign the software.

If the contents of the software were changed for some reason, the verification fails. The software can be changed by accident or with malicious intent, for example to inject malicious code into an otherwise beneficial piece of software.

Since Apple issues the Developer IDs, they also have the option of revoking and blacklisting certificates. This usually happens when a Developer ID has been abused to distribute malware. The Malware Removal Tool or MRT is the part of the system that will identify and block or remove blacklisted software.

App Store Distribution

Applications distributed through Apple’s App Stores have to be signed with a valid Developer ID. A developer needs to have a valid subscription ($99 per year for individuals and $299 for organizations) to obtain a certificate from Apple.

When a developer submits software to an App Store on any Apple system, the software will be reviewed by Apple to confirm whether it meets the various guidelines and rules. This includes a scan for malware.

App Store applications also have to be sandboxed, which means they can only access their own data (inside the “sandbox”) and not affect other applications, services, or files without certain “entitlements” and, in many cases, user approval.

App Store rules and regulations and sandbox limitations preclude many types of applications and utilities. On iOS, tvOS and watchOS, they are the only way for developers to distribute software to end users.

Apple provides a method for Enterprises and Organizations to distribute internal software directly without going through the App Store and App Store review. This should be limited to distribution to employees and members of the organization (such as students of a university or school). This method has infamously been abused by Facebook and other major companies, which led to Apple temporarily revoking their certificates. (We will not discuss Enterprise App Distribution in this post.)

There is also much criticism about how realistic Apple’s rules and guidelines are, how arbitrary the review process is, and whether the sandbox restrictions are useful or unnecessarily draconic. A lot of this criticism is valid, but I will ignore this topic in this post for the sake of simplicity and brevity.

Software downloaded from the App Store is automatically trusted by the system, since it underwent the review and its integrity and source can be verified using the signature. In the rare case that some malicious software was missed by the review process, Apple can revoke the Developer certificate or blacklist the software with the Malware Removal Tool.

Distribution outside of the Mac App Store: Gatekeeper and Quarantine

As mentioned before, iOS, tvOS, and watchOS applications have to be distributed to end users through the App Store, be signed with a valid Developer ID, and undergo the review.
Because the Mac existed a long time before the Mac App Store, software vendors have many ways of distributing software. Originally software was sold and delivered on physical media (floppy disks, CDs, and DVDs), but with the rise of the internet, users could simply download software from the developer’s or vendor’s website or other, sometimes dubious, sources.

Apple has (so far) accepted and acknowledged that these alternative means of software distribution and installation are necessary on macOS. To provide an additional layer of security for the end user in this use case, Apple introduced Gatekeeper in OS X 10.8 Mountain Lion.

When a user downloads a software installer or archive from the internet it is ‘quarantined.’ When the user attempts to install or launch the software for the first time, Gatekeeper will evaluate the software. There are many steps in this evaluation, and Howard Oakley explains the process in much detail in this post.

Howard Oakley: Will Gatekeeper let me run that app in Catalina?

You can see the quarantine flag with the xattr command:

% xattr ~/Downloads/somefile.pkg
com.apple.macl
com.apple.metadata:kMDItemWhereFrom
com.apple.quarantine

You can delete the quarantine flag with xattr -d com.apple.quarantine path/to/file. Usually, there is no real need to.

The first step of the evaluation is verifying the software’s signature, Developer ID, and integrity. When encountering an unsigned piece of software, the user will be presented with a warning dialog. Users with administrator privileges can bypass Gatekeeper by choosing “Open” from the context menu instead of double-clicking to open. Gatekeeper can be completely disabled with the spctl command, though this is not recommended.

The Developer signature provides a way to verify the source and integrity of a piece of software, but since the distribution happens outside of Apple’s control, a malicious developer could still put any form of malicious code in the signed software and keep Gatekeeper happy. As long as the malware avoids widespread detection it will look good to Gatekeeper and the end user. Even when the malware is detected by Apple and the Developer ID is revoked, it is not hard for a malicious developer to obtain or steal a new Developer ID and start over.

Enter Notarization

Apple needed another layer of security which could scan software for known malware and enforce a certain set of security rules on third party software, even when it is distributed outside of the Mac App Store.

Note: I find the effort Apple is putting into Gatekeeper and Notarization quite encouraging. If Apple wanted to restrict macOS to “App Store only” distribution in the near future, this effort would not be necessary. This shows that Apple still acknowledges the important role that independent software distribution has for macOS.

To notarize software, a developer has to sign it with their Developer ID and upload it to Apple using Xcode or the altool command. Then Apple’s notarization workflow will verify that the software fulfills certain code requirements and scans it for certain malware. The exact details of what is considered malware are unknown. However, we do know that the process is fully automated and, unlike the App Store approval process, does not involve human reviewers.

If the software has passed the notarization process the result will be stored on Apple’s servers. When Gatekeeper on any Mac verifies the software it can confirm the notarization status from Apple’s servers.
Alternatively, a developer can ‘staple’ a ‘ticket’ to the software, which allows Gatekeeper to confirm the notarization status without needing to connect to Apple.

Apple Support: Use Apple products on enterprise networks (Look for ‘App notarization’)

When Gatekeeper encounters quarantined software that is notarized, it will show the familiar Gatekeeper dialog with an additional note that: “Apple checked [the software] for malicious software and none was detected.”

Since 10.14.5, when Gatekeeper encounters signed software that is not notarized, it will show the standard dialog but with an additional yellow warning sign.

Apple Support: Safely open apps on your Mac

As with the previous Gatekeeper checks for a valid signature, an administrator user can override the check by choosing ‘Open’ from the context menu instead of double-clicking to open.

In Mojave notarization was enforced in Gatekeeper checks for kernel extensions, and in 10.14.5 for software with new Developer IDs, which were created after June 2019. Starting with Catalina, all software needs to be notarized to pass Gatekeeper when the first launch or installation is initiated by a user. However, the warning can still be overridden by an administrator user using the context menu.

What can be Notarized

As of now, the following pieces of software can be notarized:

Application bundles
Kernel Extensions
Installer Packages (pkg), Disk images (dmg) and zip archives

When you are building other types of software, such as command line tools, you can (and should) place them in one of the archive formats. The preferred choice for MacAdmins should be an installer package (pkg) since it will also place the binary in the correct location in the file system with the correct access privileges.

What cannot be Notarized

You should not notarize a binary or application that you did not sign! The Developer ID used to sign a binary (application or command line tool) should be the same as the Developer ID used to submit the software for notarization.

Apple has loosened the requirements for notarization until Jan 2020 to give developers some extra time to adapt. Once the requirements return to the full restrictions, an attempt to notarize third party software with a different Developer ID will fail. (Existing notarizations will remain valid after that date.)

Apple Developer: Notarizing Your Mac Software for macOS Catalina

Installer command

When you install software using the installer command from the Terminal or a script, it will bypass quarantine and the Gatekeeper check. This is also true when you install software using a management system such as Jamf Pro, Munki, Fleetsmith, etc.

Software you re-package as a MacAdmin for distribution through management systems does not need to be notarized. Given this and the limitations on notarizing third party software above, you should very rarely need to notarize as a MacAdmin.

Example: Re-packaging third party software from dmg

A lot of applications for macOS are distributed as disk images. The normal end user workflow would be to mount the dmg after downloading, and then copy the application from the dmg to the /Applications folder.

There are two steps where Gatekeeper might trigger: when you mount the disk image and when you launch the application for the first time after copying. To pass both these checks, a developer should prudently notarize both the disk image and the application. Google Chrome, for example, does exactly that, avoiding the Gatekeeper warning.
We can verify this with the spctl command:

% spctl -a -vv -t install ~/Downloads/googlechrome.dmg
/Users/armin/Downloads/googlechrome.dmg: accepted
source=Notarized Developer ID
origin=Developer ID Application: Google, Inc. (EQHXZ8M8AV)

% spctl -a -vv -t execute "/Volumes/Google Chrome/Google Chrome.app"
/Volumes/Google Chrome/Google Chrome.app: accepted
source=Notarized Developer ID
origin=Developer ID Application: Google, Inc. (EQHXZ8M8AV)

Unfortunately, some management systems don’t understand “Apps in disk images” as a distribution method. For these systems MacAdmins need to re-package the application into a pkg. You can do that quickly with pkgbuild:

% pkgbuild --component /Volumes/Google\ Chrome/Google\ Chrome.app --install-location /Applications/ GoogleChrome.pkg
pkgbuild: Adding component at /Volumes/Google Chrome/Google Chrome.app
pkgbuild: Wrote package to GoogleChrome.pkg

or use quickpkg.

This new installer package will be neither signed nor notarized:

% spctl -a -vv -t install GoogleChrome.pkg
GoogleChrome.pkg: rejected
source=no usable signature

When you send this installer package to another Mac with AirDrop, the receiving system will attach the quarantine flag. And when you double-click it, you will get the Gatekeeper warning. However, you can still install it using the installer command in Terminal, which bypasses the Gatekeeper system, just as your management system will:

% installer -pkg ~/Downloads/GoogleChrome.pkg -tgt /

Alternatively, you can choose “Open” from the context menu in Finder to bypass Gatekeeper. However, this is not something you want to teach your end users to do regularly.

Firefox can be downloaded as a disk image as well as an installer package. While the application inside both is notarized, neither the disk image nor the installer package is. The disk image mounts with no issues, but when you try to open the installer pkg by double-clicking you will get the expected notarization warning. Nevertheless, the pkg will work fine after importing it to your management system.

Edge cases

There are some cases where notarization would be useful for MacAdmins but might not even be possible. I met a MacAdmin working at a university at MacSysAdmin last week. They need to re-package a VPN client with customized configuration files to be installed on student-owned machines. There is really no solution without the students running into the notarization warning. Teaching the users how to bypass Gatekeeper is not a good solution. In these cases you have to work with the software vendor and Apple to find a workable solution.

Summary

Notarization is a new security layer introduced by Apple in Mojave. The restrictions imposed on non-notarized software increase in Catalina.
When an application is installed or launched for the first time by the user (by double-clicking), Gatekeeper will verify the signature and notarization status and warn the user if any are missing.
Developers should sign and notarize their applications and tools.
Mac Administrators should not notarize applications and tools from third parties.
Applications and packages installed through management systems bypass Gatekeeper and do not need to be notarized.

Conclusion

Apple is loudly messaging that notarization is absolutely required for applications in Catalina. While this message makes sense for the developers building the software, it does not apply to administrators who re-package third party software for distribution through management systems.
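To tie the re-packaging example above together, here is a minimal Python sketch of the dmg-to-pkg workflow (mount the disk image, wrap the app with pkgbuild, unmount). It is an illustration only, not the quickpkg implementation; the download path, app name, and output name are placeholders you would replace:

#!/usr/bin/python3
# Minimal sketch of a dmg-to-pkg re-packaging workflow.
# Assumptions: the dmg path, app name, and pkg name are placeholders;
# hdiutil and pkgbuild are the standard macOS command line tools.
import subprocess

dmg_path = "/Users/armin/Downloads/googlechrome.dmg"    # placeholder
mount_point = "/tmp/chrome_mount"
app_name = "Google Chrome.app"                          # placeholder
output_pkg = "GoogleChrome.pkg"

# mount the disk image at a known mount point, without opening a Finder window
subprocess.run(["hdiutil", "attach", dmg_path, "-nobrowse",
                "-mountpoint", mount_point], check=True)
try:
    # wrap the application bundle in a component installer package
    subprocess.run(["pkgbuild",
                    "--component", f"{mount_point}/{app_name}",
                    "--install-location", "/Applications",
                    output_pkg], check=True)
finally:
    # always unmount the disk image, even if pkgbuild failed
    subprocess.run(["hdiutil", "detach", mount_point], check=True)

print(f"wrote {output_pkg}")

As noted above, the resulting package is neither signed nor notarized, which is fine for distribution through a management system.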
MacAdmins should join Apple in demanding signed and notarized binaries and installer packages from developers. However, MacAdmins can also continue their current workflows for re-packaging and distribution.

Links

Scripting OS X:
Notarize a Command Line Tool
Check Installer Pkgs for deprecated scripts

Tom Bridge:
Mac Admins Talk: The Loyal Order of Notaries
Notarization Follow-Up and Video
Apple Updates Notarization Requirements
Manipulating the System Policy Database with Configuration Profiles
Manipulating the System Policy Database with Configuration Profiles, Part 2

Howard Oakley:
Will Gatekeeper let me run that app in Catalina?
Notarization devalued?
The ‘hardened runtime’ explained

Apple:
Safely open apps on your Mac
Notarizing Your Mac Software for macOS Catalina
Resolving Common Notarization Issues
All About Notarization (WWDC 2019)
Advances in macOS Security (WWDC 2019)

At WWDC last week, there was a very interesting session on “Apple File Systems” (APFS). It covered the new split system layout in macOS Catalina with a read-only system volume, volume replication with APFS, and how external USB drives and SMB work on iPadOS. The entire session is very interesting and well worth watching. Go ahead, I’ll wait…

Around the 13 minute mark, during the ‘Volume Replication’ segment, the engineer on stage talks about using the asr (Apple Software Restore) tool to ‘replicate’ a system volume to several computers at once and gives the example of a computer lab. He then proceeds to explain the new options in asr regarding APFS volumes and snapshots.

The new features are hugely interesting and I think they will be very useful for backup solutions. There will probably be some applications for MacAdmins, but I disagree with the engineer on stage and some MacAdmins on Twitter and Slack: Catalina will not bring a revival of imaging.

Note: I wrote a book on this: “macOS Installation for Apple Administrators”

What killed imaging?

Back in the Sierra days, there was this idea that the introduction of APFS would ‘kill’ imaging. The asr tool relied on many HFS+ behaviors and it was questionable that Apple could or would maintain that for APFS. But while there were some changes to asr in the High Sierra and Mojave upgrades, it still worked.

What killed imaging as a process for MacAdmins was the T2 system controller, first introduced with the iMac Pro. There are two main aspects:

NetBoot and external boot are defunct
Firmware needs to be updated with the system

NetBoot and external boot are defunct

To re-image or re-install a system, you have to boot it off a different system volume (NetBoot, Recovery, external drive). Alternatively, you can put the system into target disk mode and image or install the system directly on the internal drive.

On Macs with the T2 system controller, NetBoot is explicitly defunct. External boot is disabled by default. It can be re-enabled, but the process is convoluted, requires at least one full setup process, and cannot be automated. This leaves Recovery as the system to use to replace the system volume and, not surprisingly, there are a few tools that have focused on using Recovery in the new T2 Mac world:

Twocanoes MDS
installr
bootstrappr

Firmware needs to be updated with the system

You could also put the target Mac in target disk mode and image its system. This will work, as long as the system on the image is the same version as the system that was installed before.
We have been warned about this in the infamous HT208020 support article:

Apple doesn’t recommend or support monolithic system imaging as an installation method. The system image might not include model-specific information such as firmware updates.

Modern Macs don’t just require a few files on disk to make a bootable system. Inside your Mac are several subsystems that require their own systems (i.e. firmware) to run. Most prominent are the T1 or T2 system controllers, which are actually independent custom ARM-based processors running a system called ‘iBridge,’ which is an iOS derivative.

If you just exchange the ‘normal’ system files on the hard drive over TDM, without also updating the various firmwares in the system, you may get your Mac into a state where it cannot boot. This was most obvious with the macOS High Sierra upgrade. Re-imaging a 10.12 Sierra Mac to High Sierra running on APFS would lead to a Mac that could not read the new system volume. The firmware update that came with High Sierra is needed, so the firmware can mount, read, and start the APFS system volume.

How can I update or upgrade?

For security, only Apple’s ‘Install macOS’ application and the intermediate software and security update packages have the necessary entitlements to change the built-in firmware(s). Firmware updates can be in system updates (minor version updates, i.e. 10.14.4 to 10.14.5), security updates, and major system upgrades (i.e. 10.13 to 10.15).

There are three options to apply a system update (e.g. from 10.14.4 to 10.14.5) or security update:

‘Install macOS *’ application, either manually or with the startosinstall tool
Software Update, either manually or through the command line tool
system or security update pkg installer downloaded from support.apple.com

When you want to upgrade a Mac through a major version change (e.g. 10.13 to 10.14 or 10.15), there is only one option:

‘Install macOS *’ application, either manually or with the startosinstall tool

The one remaining use case for imaging

Given the above limitations, there is one use case left for imaging. When you have full control over the macOS version installed on the Mac and its firmware, and the image matches that version, then you can image. However, since NetBoot and external boot are defunct, you will have to image either over target disk mode (fast, but only a few Macs at a time) or using the Recovery (hard to automate, comparatively slow).

The remaining strength of the imaging workflow is the raw speed. Some application suites measure several gigabytes, if not tens of gigabytes. With installation workflows, these have to be downloaded, decompressed (pkg installers are compressed archives) and copied to the system drive, a process that takes a lot of time. With imaging, these can be laid down with fast block copies.

For example, the re-installation of a MacBook Pro I tested recently took about 25 minutes. This time includes downloading the 6GB ‘Install macOS’ application and the entire re-installation process. (I could probably have sped this up with a caching server or by pre-installing the full ‘Install macOS’ application.) If I could have used imaging this would take 2–3 minutes.

If you are in a situation where you have to restore Macs to a pre-defined state frequently and quickly, then imaging might still be a useful workflow. One use case may be MacBooks that get frequently handed out as loan units, where the users get administrative privileges, so they can install extra software and configure the loan units.
You will have to invest extra effort during updates or upgrades to apply them first on the devices, to ensure the firmware gets updated, and then to update the image as well. In some use cases this extra effort can be worthwhile.

MDM (and DEP) is required

With modern macOS there are other considerations for deployment that make classic imaging workflows less practical.

Before macOS 10.13 High Sierra, MacAdmins could manage their Mac fleet without an MDM server. In High Sierra 10.13.4 Apple added two things to the MDM protocol:

‘user-approved’ MDM
Kernel extension whitelisting via configuration profile

The second feature (whitelisting kernel extensions) requires the first (user-approved MDM). You cannot manage Kernel Extensions or Privacy Preferences Control settings in Mojave without a user-approved MDM. In most organizations, these are not limitations you can work around. An MDM is now a requirement to manage Macs in an organization.

From what we can glean from the WWDC sessions, the (UA)MDM controls will be increased even further with Catalina: DEP or ‘Automated Device Enrollment’ with Apple Business Manager or Apple School Manager will be required for some new management features, such as ‘bootstrap tokens’ for FileVault.

Each Mac client needs to be enrolled in the MDM individually. The MDM enrollment cannot be part of an image. The easiest way to get a Mac enrolled is with Automated Device Enrollment (formerly known as DEP), which happens at first boot after installation.

Third party software

It is not just the macOS system that needs to individually enroll with the MDM server. Many third party solutions now also require subscriptions or licenses to be activated on each device individually. All these additional configurations that need to happen after installation or imaging decrease the usefulness of including all software and configuration in an image.

Patching and software updates

Most imaging deployments used a workflow where the image was kept ‘static’ or ‘frozen’ for longer periods of time, usually six or twelve months. This minimizes the effort to update the image, system and software. However, modern operating systems and third party software have update frequencies of 4–10 weeks. Modern security requirements will require these updates to be applied in a timely manner. Critical security problems can strike at any time, requiring fast updates from the vendors and the MacAdmins.

As with the MDM above, having a system in place that allows the MacAdmin to easily and quickly deploy and, when necessary, enforce an update or patch to the entire fleet of devices is an important requirement.

Software and patch management of non-App Store applications is not part of the MDM protocol. Nevertheless, many MDM solutions also include additional functionality for software management, with varying degrees of usefulness. Some MacAdmins prefer to combine their MDM solution with the open source solution Munki instead. Munki is considered to be the best software management solution for macOS, but does not include MDM functionality itself.

Whichever software management solution you use, once you have it in place, it will be easier to manage (i.e. install and enforce) software through the management system than to keep an image up-to-date and re-apply it. You will end up with a ‘thin’ base image and everything else deployed and managed by the management system. At that point you might as well switch to an installation-based workflow.
But, the engineer on stage said…

Here are all the limitations on imaging, summarized:

NetBoot and external boot are defunct
system firmware needs to be updated with the system
MDM and DEP are required
frequent security updates and patches require continuous software management

None of these limitations are addressed by the changes to the asr tool in Catalina. Changes in other areas of the system in Catalina will actually reinforce some of these limitations. Imaging is still dead.

But why even have asr, then?

The asr tool exists because Apple needs a tool to image the operating system to new Macs in the factory. Obviously, Apple has absolute control over the versions of macOS and firmwares deployed to the systems, so they can ensure they all match. Speed is a priority, so Apple needs and maintains asr. Other uses of asr, including its use as an imaging tool for administrators, have always been secondary.

As mentioned earlier, when your environment has similar requirements (fast re-deployment) and can provide tight control over the macOS and firmware versions, then imaging might still be a useful workflow for you. You can already do this with High Sierra or Mojave. You do not have to wait for the new Catalina features for this. In general, a simpler (albeit slower) installation-based workflow is less complex to deploy and maintain. (Imaging might seem less complex, because it is more familiar.)

So, the new features in the presentation are pointless?

The other use case for asr in the presentation, backups, is very exciting. It will allow the system to take a snapshot and then copy the data of the snapshot to a backup while the system keeps running and changing files. You may also be able to restore a system from a snapshot stored elsewhere.

The split of system volume and user data volume in Catalina is also very intriguing for MacAdmins. This may, of course, break some third party software. (Start testing now.) But it may also open up new options for management. One of these (user enrollment) is introduced in the “Managing Apple Devices” WWDC video.

One possible workflow could be to snapshot and/or image the data volume and leave the system volume intact (you have to, it is read-only and SIP protected). It is still questionable how well this might work, since the firmlink connections between the system and the data volume might not survive the replacement of their targets. You can start testing this now, but keep in mind that the details of the new file system layout will still change during the beta phase.

Summary

The changes introduced to the file system in macOS Catalina at WWDC are major and will enable new workflows for MacAdmins. Start testing Catalina now. The limitations that ‘killed’ imaging still apply or might even be reinforced. Imaging is still dead.

Last week, Apple posted one of the first support articles specifically for macOS Mojave:

Apple Support: Prepare your institution for iOS 12 or macOS Mojave

This article is not quite the bombshell that the infamous HT208020 for High Sierra is. However, it contains a few firecrackers which will affect many Mac deployments. You can test deployments with the public beta or developer release of Mojave right away.

Update: Apple has posted a new article describing how to avoid this with a Privacy Configuration Profile. Ben Toms has a wonderful summary.

The piece of information I want to focus on for this post affects Apple Remote Desktop client configuration (called ‘Remote Management’ in the ‘Sharing’ preference pane).
Mac Admins have been using the command line tool kickstart to enable and configure Apple Remote Desktop access on clients with scripts through a management system.

Apple Support: Use the kickstart command-line utility in Apple Remote Desktop
Scripting OS X: Control Apple Remote Desktop Access with Munki

In macOS Mojave, Apple will restrict the functionality of kickstart:

For increased security, using the kickstart command to enable remote management on a Mac will only allow you to observe it when sharing its screen. If you wish to control the Mac while sharing its screen, enable remote management in System Preferences.

This continues Apple’s effort to require user interaction for every configuration that can provide ongoing access to sensitive data or to the system of a Mac, like User-Approved MDM and the new privacy controls.

What this means for Admins

If you rely on Apple Remote Desktop for remote control and remote assistance, this will disrupt your installation workflow. The kickstart tool will enable ARD access and configure the users, but not enable any access privileges. You get a nice (red) warning in the shell, and when you go into the Remote Management preference pane, no active access is enabled. You can only manually enable the access privileges in the ‘Sharing’ preference pane, which requires administrator privileges to unlock.

You can still use kickstart to disable Remote Management access.

This limitation extends to Screen Sharing. You can enable Screen Sharing (when ARD/Remote Management is disabled) from the command line with:

$ sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.screensharing.plist

You have to restart System Preferences to pick up the change in the UI. This command will enable Screen Sharing access, but it will be observe only. (Note: you can use launchctl unload ... to disable.) When you enable Screen Sharing manually in the ‘Sharing’ preference pane, it will grant full access.

Workarounds

I have (so far) been unsuccessful in determining where the restricted access setting is stored. My suspicion is that the TCC database is involved. If the setting is controlled from a protected settings file or database, then you can at least read that to determine the state and use that information to trigger a notification to the user that action is required. So far, however, I cannot get this information yet. If you find any means of determining the state, please let me know and I will update this post.

Unfortunately, Apple has provided no alternative means of controlling Screen Sharing or ARD with configuration profiles from a UAMDM, leaving admins stranded without an automated solution.

Update: Rich Trouton’s recent post on managing ARD access with user groups is of interest to admins encountering this problem.

Admins that require ARD or Screen Sharing will have to rely on users actively enabling the settings. To make things worse, unlike the approval of an MDM profile or a kernel extension, the ‘Sharing’ preference pane is locked for standard (non-admin) users. You can either provide a way for users to be temporarily promoted to admin or modify the Authorization database to allow standard users to unlock the ‘Sharing’ pane. However, unlocking the Sharing pane allows access to many more critical services, which is untenable from a security perspective.

It will have to be seen how this affects third party remote access applications.
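While the restricted-access setting itself remains elusive, you can at least detect whether the Screen Sharing service has been enabled at all and use that as the trigger for a user notification. A minimal sketch, under the assumption that querying launchd for the com.apple.screensharing service is an acceptable rough proxy (it does not reveal whether access is observe-only):

#!/usr/bin/python3
# Rough proxy check: asks launchd whether the Screen Sharing service is loaded.
# Assumption: 'launchctl print system/com.apple.screensharing' returns non-zero
# when the service is not enabled. Run this as root (e.g. from your management
# agent); it does not detect the observe-only restriction.
import subprocess

result = subprocess.run(
    ["launchctl", "print", "system/com.apple.screensharing"],
    capture_output=True, text=True)

if result.returncode == 0:
    print("Screen Sharing service is loaded")
else:
    print("Screen Sharing service is not enabled; notify the user to enable it")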
I have tried setting up Edovia Screens Connect on my Mojave virtual machine, but that currently uses the native Screen Sharing/Remote Management client, so it will encounter the same limitations. Other third party tools may have their own clients. As usual, please, provide your feedback to Apple through the usual channels (macOS beta Feedback app, bugreport, your Apple representative, SE or technical support).
8585
dbpedia
3
8
https://docs.veracode.com/r/compilation_packaging
en
Package your code
https://docs.veracode.co…code-favicon.png
https://docs.veracode.co…code-favicon.png
[ "https://docs.veracode.com/img/Veracode_Docs_Logo_Light_Mode.svg", "https://docs.veracode.com/img/Veracode_Docs_Logo_Dark_Mode.svg" ]
[]
[]
[ "" ]
null
[]
2024-07-23T04:14:41+00:00
Veracode provides specific requirements for compiling and packaging your application code to ensure successful Static Analysis scans.
en
/img/veracode-favicon.png
https://docs.veracode.com/r/compilation_packaging
Veracode provides specific requirements for compiling and packaging your application code to ensure successful Static Analysis scans. This section provides specific instructions for Veracode-supported languages and platforms. Additionally, review this general guidance that applies to all Veracode static scans. To package your code automatically, see About auto-packaging. You can use the Veracode Packaging Cheat Sheet to generate language-specific packaging guidance for Static Analysis. For language support specific to Veracode Pipeline Scan, see Pipeline Scan supported languages. The Veracode Platform requires an executable set of files to perform a static scan. Individual libraries or DLLs that support a main executable generally require the executable to perform an adequate scan. You must upload all executables. Where possible, upload first-party dependent libraries to improve the quality of the scan. Veracode notifies you of any missing dependencies before the scan begins. You have the opportunity to upload them. If you want source file and line number information for flaws, you must upload the debug symbols for the application, either PDB files for Windows binaries, or applications built including debug symbols according to the instructions in this document. You must upload debug symbols for C/C++ and iOS applications. In general, for a successful upload of files to Veracode, follow these basic guidelines: Only upload files with names consisting of printable, UTF-8 characters. Only upload applications built using UTF-8 encoding. Do not upload obfuscated binaries. Do not upload installer packages, such as Linux RPM or Windows InstallShield. Do not upload Classic ASP applications in the same scan with application code written in other languages. You can upload archives of multiple application files in these formats: ZIP, TAR, TAR.GZ, TGZ. The Veracode Platform expands the archive and lists all the executable files it finds inside. These rules apply to uploading archives: Do not upload a password-protected archive. The Veracode Platform securely encrypts all files that are uploaded. It is not necessary to password protect the archive, and the Veracode Platform will not be able to expand it if a password is present. Do not upload archives of archives. The Veracode Platform only expands the top level of archives and does not proceed if it finds additional archives inside (except for JARs, EARs, and WARs). When using tar to combine multiple files, use the -h option to ensure that tar archives the file that the symbolic link points to, rather than archiving the symbolic link. Veracode does not support the RAR archive format.
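The note about tar's -h option has a direct equivalent if you script your packaging in Python: the tarfile module can dereference symbolic links while it builds the archive. A minimal sketch, assuming a build output directory of your own choosing (the directory path and archive name here are placeholders):

#!/usr/bin/python3
# Minimal sketch: build an upload archive that follows symbolic links,
# mirroring 'tar -h'. The input directory and archive name are placeholders.
import tarfile

build_dir = "build/output"          # placeholder: your compiled application files
archive_name = "upload-package.tar.gz"

# dereference=True stores the files that symlinks point to,
# instead of storing the symlinks themselves
with tarfile.open(archive_name, "w:gz", dereference=True) as tar:
    tar.add(build_dir, arcname=".")

print(f"created {archive_name}")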
8585
dbpedia
3
92
https://rtrouton1.rssing.com/chan-7978339/article496.html
en
Automating AutoPkg runs with autopkg-conductor
[ "https://pixel.quantserve.com/pixel/p-KygWsHah2_7Qa.gif", "https://derflounder.files.wordpress.com/2018/07/autopkg_new_package_message.png?w=298&h=126", "https://derflounder.files.wordpress.com/2018/07/autopkg_no_new_package_message.png?w=299&h=115", "https://derflounder.files.wordpress.com/2018/07/screen-shot-2018-07-05-at-10-38-32-pm.png?w=299&h=220", "https://derflounder.files.wordpress.com/2018/07/screen-shot-2018-07-05-at-10-13-10-pm.png?w=595", "https://derflounder.files.wordpress.com/2018/07/screen-shot-2018-07-05-at-9-14-08-pm.png?w=595", "https://derflounder.files.wordpress.com/2018/07/screen-shot-2018-07-05-at-10-38-32-pm1.png?w=299&h=220", "https://www.thesun.co.uk/wp-content/uploads/2024/08/2024-martin-odegaard-arsenal-celebrates-926032710.jpg?strip=all&w=960", "https://www.thesun.co.uk/wp-content/uploads/2024/08/martin-odegaard-arsenal-premier-league-925852987.jpg?strip=all&w=703", "https://www.rappler.com/tachyon/2024/08/dindo-track-august-18-2024-11pm.png?fit=1024%2C1024", "https://www.thesun.co.uk/wp-content/uploads/2024/08/warning-taken-without-permission-i-925787029.jpg?strip=all&w=904", "https://www.thesun.co.uk/wp-content/uploads/2024/08/guys-i-m-proud-finally-891685655.jpg?strip=all&w=960", "https://i.etsystatic.com/19970584/r/il/93109a/6032426221/il_570xN.6032426221_3jas.jpg", "https://i.etsystatic.com/5225632/r/il/eb60d0/4874868937/il_570xN.4874868937_5j1w.jpg", "https://www.thesun.co.uk/wp-content/uploads/2024/08/449ba300-734f-4533-a6ea-b651e23d0da6.jpg?strip=all&w=960", "https://imgix.bustle.com/uploads/image/2022/11/15/6f0be778-1d72-43ad-b16e-4458c541dd3b-screen-shot-2022-11-15-at-13351-pm.png?w=500&fit=max", "https://www.thesun.co.uk/wp-content/uploads/2024/08/2024-celebrated-best-soap-operas-916849093.jpg?strip=all&w=639", "https://gangsterreport.com/wp-content/uploads/2018/09/asdf.jpg", "https://3.bp.blogspot.com/-OlbCbIJLXl4/V-NOGuPQcXI/AAAAAAAAFDQ/YfJavwgvBLsLjqvBcvSZ22AUWUEQnP_2wCLcB/s640/TG%2BLM%2BGrade%2B5.png", "https://2.bp.blogspot.com/-6nu3H3qADb8/UqTp6oPQ8HI/AAAAAAAAFAY/zPnHrSie6x4/s1600/NDEBELE+LESSONS+BUTTON.JPG", "https://1.bp.blogspot.com/-l8boxVVGBMo/XyjFAE66S7I/AAAAAAAACRY/-v89EM9DOqcfwiQ0NRbTzShjyZU_iknUACLcBGAsYHQ/w512-h288/Copy%2Bof%2BCopy%2Bof%2BCopy%2Bof%2Bwebinar%2Bon%2Bgoogle%2Bsuite%2Bfor%2Beducation%2B%25282%2529.png", "https://www.digitalkhabar.in/wp-content/uploads/शानदार-सुविचार.jpg", "https://c1.vgtstatic.com/thumb/1/8/188780-v1/dan-and-laura-dotsons-new-house-storage-wars-auctioneers.jpg", "https://3.bp.blogspot.com/-mZg8naiVPvE/XMxlDY-fSfI/AAAAAAAACaA/ZBLMtTP1BS8x2LcPpl2hUFw31RTASb0jgCLcBGAs/s320/Shradhanjali.jpg", "https://audioz.download/uploads/posts/2022-02/thumbs/1643811234_emu.png", "https://www.marathi-unlimited.in/wp-content/uploads/2018/12/नवनाथ-Navnath-Ki-Arti.jpg", "https://1.bp.blogspot.com/-yVhgsYPqQVk/WYMgQP_FC6I/AAAAAAAAYE0/AmrnWcRNYe4ootTnDSwInWUo-hnDf6vfwCLcBGAs/s640/kijo%2Bcover%2Bphoto.jpg", "https://3.bp.blogspot.com/-69WiCTFq0MI/WTTVJO-1L9I/AAAAAAABRRs/qfz9mt0HjJYZM1TInAFpbv3XpfVoLoqEgCLcB/s320/ts.png", "https://chrisukorg.files.wordpress.com/2015/02/perry.jpg?w=529&h=511", "https://assets.rappler.com/BFC459FEAD4148FFB4D3C69721F3CB7A/img/EB2A893552A14C398204CAE6B09F555D/roxas-launch-aquino-endorsement-20150731-008.jpg", "https://3.bp.blogspot.com/-FZ-dCCO9uAY/VNOdG8a3t-I/AAAAAAAADiA/B1ZuoVimhUc/s1600/mass-of-solute-mass-of-solvent.jpg", "https://1.bp.blogspot.com/-3DmnYeyqGg4/Va-yFsbEqhI/AAAAAAAAK8g/9djm1Frva7E/s640/the%2Bpool%2B1001%2Bhotel%2Bjakarta.jpg", 
"https://mycommunitysource.com/wp-content/uploads/2012/12/robert-stern-125x125.png", "https://i.imgur.com/V8Qf3KI.png", "https://www.inettutor.com/wp-content/uploads/2019/02/Online-Grading-System-with-Grade-Viewing-Conceptual-Framework.png", "https://ic.pics.livejournal.com/dark_flamenko/9208692/1673506/1673506_original.jpg", "https://busyteacher.org/uploads/posts/2015-10/thumbs/1445892197_ing-vs-to-infinitive.png", "https://a2.mzstatic.com/us/r30/Purple1/v4/08/25/56/0825565e-8100-b566-ee09-aa660e56f559/screen1136x1136.jpeg", "https://photos-a.propertyimages.ie/media/5/9/5/3035595/c8986a32-f6d4-4bf9-aeb1-4cfc4103f0b5_m.jpg", "https://augustacrime.com/wp-content/uploads/2024/07/hailey-hecker-18-of-warrenville-narcotics-possession-drug-offense-giving-false-info-identity-fraud-to-obtain-employment-or-avoid-detection-driving-under-suspension-150x150.jpg", "https://images.qvc.com/is/image/pic/co/Danjob.jpg", "https://1.bp.blogspot.com/-YOPGWJWdd3c/WToRtjhqKNI/AAAAAAABFDI/6P7-_HaxmtQlo643FAb5TLZCJx7dqX_dwCLcB/s1600/18893232_10155410415748501_7328552017510958888_n.jpg", "https://www.ksstradio.com/wp-content/uploads/2019/07/Trondamion-Andrzhel-Cleveland.jpg", "https://4.bp.blogspot.com/-qsPB2CTvMwo/WFaNEb3cNRI/AAAAAAABbbc/v2BMl79iVwgZk1MOMlZQTcKwEFt1hrM2gCLcB/s1600/%2524_57%2B%25282a%2529.JPG", "https://i.imgur.com/QfNCYCP.png", "https://assets.suredone.com/1517/media-pics/cp049425-rear-license-plate-holder-vw-golf-mk3-north-american-tub-tray-1hm-853-481-d.jpg", "https://4.bp.blogspot.com/-BaHEXZarDas/WJdhe2T6aSI/AAAAAAAANlE/vmZsTuSVv7QlMiEfaDPwU7Lkx4MJhoyQACLcB/s640/meenakshi%2Bjoshi.jpg", "https://www.mindef.gov.bn//Mindef%20topmenu%20pictures/Leadership-His%20Majesty/DEC%20%20photos/Pengarah%20DDWS%20Hjh%20Marliyana.jpg", "https://www.homesnacks.com/images/tn/brownsville-tn-0.jpg", "https://www.thesun.co.uk/wp-content/uploads/2024/08/martin-odegaard-arsenal-premier-league-925852987.jpg?strip=all&w=703", "https://www.thesun.co.uk/wp-content/uploads/2024/08/2024-martin-odegaard-arsenal-celebrates-926032710.jpg?strip=all&w=960", "https://www.rappler.com/tachyon/2024/08/dindo-track-august-18-2024-11pm.png?fit=1024%2C1024", "https://www.thesun.co.uk/wp-content/uploads/2024/08/warning-taken-without-permission-i-925787029.jpg?strip=all&w=904", "https://www.thesun.co.uk/wp-content/uploads/2024/08/guys-i-m-proud-finally-891685655.jpg?strip=all&w=960", "https://i.etsystatic.com/19970584/r/il/93109a/6032426221/il_570xN.6032426221_3jas.jpg", "https://i.etsystatic.com/5225632/r/il/eb60d0/4874868937/il_570xN.4874868937_5j1w.jpg", "https://www.thesun.co.uk/wp-content/uploads/2024/08/449ba300-734f-4533-a6ea-b651e23d0da6.jpg?strip=all&w=960", "https://imgix.bustle.com/uploads/image/2022/11/15/6f0be778-1d72-43ad-b16e-4458c541dd3b-screen-shot-2022-11-15-at-13351-pm.png?w=500&fit=max", "https://www.thesun.co.uk/wp-content/uploads/2024/08/2024-celebrated-best-soap-operas-916849093.jpg?strip=all&w=639" ]
[]
[]
[ "" ]
null
[]
null
en
//www.rssing.com/favicon.ico
null
About two weeks ago, I noticed I had an SSL error cropping up with one of my AutoPkg recipes:

[Errno socket error] EOF occurred in violation of protocol (_ssl.c:590)

When I investigated what it meant, I wound up at this lengthy issue opened for Python’s requests module. In the end, it seemed to boil down to four issues:

1. I was running AutoPkg on macOS Sierra 10.12.6.
2. The recipe I was running used a processor which called Python’s urllib2 library.
3. Python’s urllib2 library was calling the OS’s installed version of OpenSSL to connect to a server using TLSv1.2.
4. The version of OpenSSL included with 10.12.6 does not support TLSv1.2 for the urllib2 library.

When I looked into the situation on macOS High Sierra 10.13.5, Apple had addressed the problem by replacing OpenSSL with LibreSSL. Among other improvements, LibreSSL allowed Python’s urllib2 library to connect to servers using TLSv1.2. Problem solved!

Until I ran into another problem. I had been using AutoPkgr as my way of managing AutoPkg and scheduling AutoPkg runs. However, when I set up AutoPkgr on a 10.13.5 VM and scheduled my AutoPkg nightly run, nothing happened except my CPU spiked to 100% and AutoPkgr locked up with the pinwheel of patience.

OK, maybe it was something with my VM. No problem, set up a new macOS 10.13.5 VM. Same problem. Maybe it was because I was trying to run the VM on VMware’s ESXi? Set up a new VM running in VMware Fusion. Same problem. Maybe AutoPkgr was getting confused by Apple File System? I set up a 10.13.5 VM which used an HFS+ boot volume. Same problem, replicated on both ESXi and Fusion.

No matter what I tried, trying to run recipes using AutoPkgr on macOS 10.13.x resulted in the following:

The VM’s CPU spiking to 100%
AutoPkgr locking up with the pinwheel of patience
My AutoPkg recipes not running

I was able to eliminate AutoPkg itself as being the issue, as running recipes from the command line using AutoPkg worked fine. With that information in mind, I decided to see if I could replicate what I most liked about using AutoPkgr in another form. In the end, my needs boiled down to three:

I wanted to be able to run a list of AutoPkg recipes on a scheduled basis. These recipes would be .jss recipes for uploading to a Jamf Pro server.
I wanted to be able to post information about those AutoPkg recipes to a Slack channel.
I wanted all the error messages from an AutoPkg run, but I didn’t care about all the information that came from a successful AutoPkg run.

With that, I decided to draw on some earlier work done by Sean Kaiser, a colleague who had written a script for managing AutoPkg in the pre-AutoPkgr days. For more details, please see below the jump.

Sean’s solution relies on a script and LaunchDaemon running on a Mac, where it runs hourly and is set up to only send him emails if the AutoPkg logs are different from previous runs. The email notifications are a diff against the previous logs, so only the true differences get sent. For those interested, Sean’s script is available from here:

https://github.com/seankaiser/automation-scripts/tree/master/autopkg

I was more focused on a once-daily run, so I didn’t want to use the diff methodology. After some more research, I found that my colleague Graham Pugh had written pretty much exactly what I needed: an AutoPkg post-processor named Slacker which could be used with an AutoPkg recipe list of .jss recipes to post the results to a Slack channel.
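For anyone who has not worked with Slack incoming webhooks before, the mechanism a post-processor like Slacker builds on is a simple JSON POST. A minimal sketch with a placeholder webhook URL; this illustrates the webhook API, not the Slacker code itself:

#!/usr/bin/python3
# Minimal sketch: post a multiline message to a Slack incoming webhook.
# The webhook URL and message text are placeholders.
import json
import urllib.request

webhook_url = "https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"  # placeholder
message = "AutoPkg run results:\nGoogleChrome.pkg uploaded"       # example text

payload = json.dumps({"text": message}).encode("utf-8")
request = urllib.request.Request(
    webhook_url,
    data=payload,
    headers={"Content-Type": "application/json"})

with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))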
I forked a copy of the Slacker post-processor and (with Graham’s help) made some edits to it to have the output appear exactly the way I wanted it to.

New package message:

No new package message:

Along with the Slacker post-processor, I also found a script for sending multiline output to a Slack channel. This would allow me to send the complete error log from an AutoPkg run to a specified Slack webhook. Using all of this, I wrote a script named autopkg-conductor which is designed to do the following:

1. Detect a list of AutoPkg recipes at a defined location and verify that the list is readable.
2. If the AutoPkg recipe list is readable and available, run the following actions:
A. Verify that AutoPkg is installed.
B. Update all available AutoPkg repos with the latest recipes.
C. Run the AutoPkg recipes in the list.

The AutoPkg run has all actions logged to ~/Library/Logs, with the logfiles being named autopkg-run-for- followed by the date.

If the optional slack_post_processor and slack_webhook variables are both populated, any AutoPkg .jss recipes should have their output sent to the Slack webhook specified in the slack_webhook variable.

If only the slack_webhook variable is populated, all output from the AutoPkg run is sent to the Slack channel. No filtering is applied, everything is sent.

If neither the slack_post_processor nor slack_webhook variables are populated, no information is sent to Slack. All AutoPkg run information will be in the logs stored in ~/Library/Logs.

For scheduled runs, I recommend the following:

Set up a user account named autopkg to run AutoPkg in.
Copy the autopkg-conductor script to /usr/local/bin/autopkg-conductor.sh and set the autopkg-conductor.sh script to be executable.
Set up a LaunchDaemon to run /usr/local/bin/autopkg-conductor.sh at a pre-determined time or interval.

For this example, the LaunchDaemon shown below will run /usr/local/bin/autopkg-conductor.sh as the autopkg user once a day at 2:00 AM. The autopkg-conductor script is available below. It’s also available from GitHub using the following link:
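The original post includes the LaunchDaemon property list and the autopkg-conductor script at this point. As an illustration of the schedule described above (run as the autopkg user, once a day at 2:00 AM), here is a sketch that generates such a LaunchDaemon with Python's plistlib; the label and log paths are placeholders, and this is not the author's original plist:

#!/usr/bin/python3
# Sketch: generate a LaunchDaemon that runs autopkg-conductor.sh daily at 2:00 AM
# as the 'autopkg' user. The label and log paths are placeholders; adjust for
# your organization.
import plistlib

launch_daemon = {
    "Label": "com.company.autopkg-conductor",           # placeholder label
    "ProgramArguments": ["/usr/local/bin/autopkg-conductor.sh"],
    "UserName": "autopkg",                               # account set up for AutoPkg
    "StartCalendarInterval": {"Hour": 2, "Minute": 0},   # once a day at 2:00 AM
    "StandardOutPath": "/var/log/autopkg-conductor.log",
    "StandardErrorPath": "/var/log/autopkg-conductor-error.log",
}

plist_path = "com.company.autopkg-conductor.plist"
with open(plist_path, "wb") as plist_file:
    plistlib.dump(launch_daemon, plist_file)

print(f"wrote {plist_path}; install it in /Library/LaunchDaemons and load it with launchctl")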
8585
dbpedia
2
31
https://www.jetbrains.com/help/pycharm/creating-and-optimizing-imports.html
en
Auto import | PyCharm
https://resources.jetbra…meta/preview.png
https://resources.jetbra…meta/preview.png
[ "https://resources.jetbrains.com/help/img/idea/2024.2/app.actions.quickfixBulb.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.actions.more.svg", "https://resources.jetbrains.com/help/img/idea/2024.2/app.expui.codeInsight.intentionBulb.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_optimize_imports.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.expui.general.settings.svg", "https://resources.jetbrains.com/help/img/idea/2024.2/py_optimize_imports_before_commit.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_reformat-file-dialog.png", "https://resources.jetbrains.com/help/img/idea/2024.2/python_import.png", "https://resources.jetbrains.com/help/img/idea/2024.2/python_import1.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_import_style1.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_import_style2.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_import_inspection.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_convert_imports.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_import_fix_relative.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_relative_absolute_imports_intention.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_import_auto_completion.png", "https://resources.jetbrains.com/help/img/idea/2024.2/ws_es6_auto-import.png", "https://resources.jetbrains.com/help/img/idea/2024.2/ws_es6_auto-import.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_ignore_missing_import.png" ]
[]
[]
[ "" ]
null
[]
null
Basic procedures to create and optimize imports in PyCharm. Learn more how to import the missing import or XML namespace.
en
https://jetbrains.com/ap…e-touch-icon.png
PyCharm Help
https://www.jetbrains.com/help/pycharm/creating-and-optimizing-imports.html
Auto import When you reference a class that has not been imported, PyCharm helps you locate this file and add it to the list of imports. You can import a single class or an entire package, depending on your settings. The import statement is added to the imports section, but the caret does not move from the current position, and your current editing session does not suspend. This feature is known as the Import Assistant. Using Import Assistant is the preferred way to handle imports in PyCharm because import optimizations are not supported via command line. The same possibility applies to XML files. When you type a tag with an unbound namespace, the import assistant suggests creating a namespace and offers a list of appropriate choices. Automatically add import statements You can configure the IDE to automatically add import statements if there are no options to choose from. Press Ctrl+Alt+S to open settings and then select Editor | General | Auto Import. In the Python section, configure automatic imports: Select Show import popup to automatically display an import popup when tying the name of a class that lacks an import statement. Select one of the Preferred import style options to define the way an import statement to be generated. When tooltips are disabled, unresolved references are underlined and marked with the red bulb icon . To view the list of suggestions, click this icon (or press Alt+Enter) and select Import class. Disable all tooltips Hover over the inspection widget in the top-right corner of the editor, click , and disable the Show Auto-Import Tooltip option. Disable auto import If you want to completely disable auto-import, make sure that: All import tooltips are disabled. The automatic insertion of import statements is disabled. Optimize imports The Optimize Imports feature helps you remove unused imports and organize import statements in the current file or in all files in a directory at once according to the rules specified in Settings | Editor | Code Style | <language> | Imports. Optimize all imports Select a file or a directory in the Project tool window (View | Tool Windows | Project). Do any of the following: In the main menu, go to Code | Optimize Imports (or press Ctrl+Alt+O). From the context menu, select Optimize Imports. (If you've selected a directory) Choose whether you want to optimize imports in all files in the directory, or only in locally modified files (if your project is under version control), and click Run. Optimize imports in a single file Place the caret at the import statement and press Alt+Enter or use the icon. Select Optimize imports. Optimize imports when committing changes to Git If your project is under version control, you can instruct PyCharm to optimize imports in modified files before committing them to VCS. Press Ctrl+K or select Git | Commit from the main menu. Click and in the commit message area, select the Optimize imports checkbox. Automatically optimize imports on save You can configure the IDE to optimize imports in modified files automatically when your changes are saved. Press Ctrl+Alt+S to open settings and then select Tools | Actions on Save. Enable the Optimize imports option. Additionally, from the All file types list, select the types of files in which you want to optimize imports. Apply the changes and close the dialog. Optimize imports when reformatting a file You can tell PyCharm to optimize imports in a file every time it is reformatted. 
Open the file in the editor, press Ctrl+Alt+Shift+L, and make sure the Optimize imports checkbox is selected in the Reformat File dialog that opens. After that every time you press Ctrl+Alt+L in this project, PyCharm will optimize its imports automatically. Creating imports on the fly Import packages on-the-fly Start typing a name in the editor. If the name references a class that has not been imported, the following prompt appears: The unresolved references will be underlined, and you will have to invoke intention action Add import explicitly. Press Alt+Enter. If there are multiple choices, select the desired import from the list. You can define your preferred import style for Python code by using the following options available on the Auto Import page of the project settings (Settings | Editor | General | Auto Import): from <module> import <name> import <module>.<name> Toggling relative and absolute imports PyCharm helps you organize relative and absolute imports within a source root. With the specific intention, you can convert absolute imports into relative and relative imports into absolute. If your code contains any relative import statement, PyCharm will add relative imports when fixing the missing imports. Note that relative imports work only within the current source root: you cannot relatively import a package from another source root. The intentions prompting you to convert imports are enabled by default. To disable them, open project Settings (Ctrl+Alt+S), select Editor | Intentions, and deselect the Convert absolute import to relative and Convert relative import to absolute. When you complete a ES6 symbol or a CommonJS module, PyCharm either decides on the style of the import statement itself or displays a popup where you can choose the style you need. Learn more from Auto-import in JavaScript. Last modified: 28 June 2024
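To make the behavior described above concrete, here is a small, hypothetical Python file (the module contents and names are invented for illustration and do not come from the PyCharm documentation) showing the two preferred import styles offered in the settings, and what an import section might look like before and after running Optimize Imports.

# Hypothetical example only: illustrates the effect of Optimize Imports.
# Preferred style "from <module> import <name>":
#     from os.path import join
# Preferred style "import <module>.<name>":
#     import os.path

# Before Optimize Imports: unused and unsorted imports remain in the file.
import sys
import json                            # unused -- would be removed
from os.path import join, basename     # basename is unused

def config_path(directory):
    print(sys.version_info)            # keeps sys in use
    return join(directory, "config.json")

# After Optimize Imports, only the imports actually used remain:
#     import sys
#     from os.path import join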
8585
dbpedia
1
72
http://indy.cs.concordia.ca/auto/
en
AUTO
[]
[]
[]
[ "" ]
null
[ "Pankaj Kamthan" ]
null
null
SOFTWARE FOR CONTINUATION AND BIFURCATION PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS This is the Home Page of the AUTO Web Site, established in January 1996. ANNOUNCEMENTS [November 30, 2019] Version 0.8 of AUTO-07p is available at GitHub. [January 1, 2011] Version 0.8 of AUTO-07p is available at SourceForge. INTRODUCTION AUTO is software for continuation and bifurcation problems in ordinary differential equations, originally developed by Eusebius Doedel, with subsequent major contributions by several people, including Alan Champneys, Fabio Dercole, Thomas Fairgrieve, Yuri Kuznetsov, Bart Oldeman, Randy Paffenroth, Bjorn Sandstede, Xianjun Wang, and Chenghai Zhang. AUTO can do a limited bifurcation analysis of algebraic systems of the form f(u,p) = 0, with f, u in R^n, and of systems of ordinary differential equations of the form u'(t) = f(u(t),p), with f, u in R^n, subject to initial conditions, boundary conditions, and integral constraints. Here p denotes one or more parameters. AUTO can also do certain continuation and evolution computations for parabolic PDEs. It also includes the software HOMCONT for the bifurcation analysis of homoclinic orbits. AUTO is quite fast and can benefit from multiple processors; therefore it is applicable to rather large systems of differential equations. For further information and details, see the AUTO Documentation. AUTO STATUS/EVOLUTION The following table represents the historical evolution in the development of AUTO in chronological order. AUTO AVAILABILITY/DISTRIBUTION The AUTO package is available for UNIX/Linux-based computers. AUTO-07P AUTO-07p is the successor to both AUTO97 and AUTO2000. It includes new plotting utilities, namely PyPlaut and Plaut04. It also contains many of the features of AUTO2000, including the Python CLUI, some parallelization, dynamic memory allocation, and the ability to use user equation files written in C. The overall performance has improved, especially for systems where the Jacobian matrix is sparse. AUTO-07p is written in Fortran. At least a Fortran 90 compiler is required to compile AUTO-07p. One such compiler is the freely downloadable GNU Fortran 95 compiler (gfortran). Gfortran ships with most current Linux distributions. Distribution: Download at GitHub. AUTO DOCUMENTATION The AUTO distribution includes a copy of the AUTO Manual in LaTeX, PostScript, and Portable Document Format (PDF). AUTO APPLICATIONS AUTO has been used in many scientific and engineering applications. A sample of applications can be found by searching on the Web for "bifurcation software AUTO". RELATED SOFTWARE Other software directly or indirectly related to AUTO includes DSTool, PyDSTool, XPPAUT, Content, MatCont, and DDE-BifTool. RELATED LECTURE NOTES Lecture Notes on Numerical Analysis of Nonlinear Equations. By Eusebius Doedel. Last Modified: Spring 2010. CONTACT/FEEDBACK If you have any comments, questions, or suggestions, please let us know by mailing "doedel at cse dot concordia dot ca" with "Subject: AUTO Related." An enquiry should include full name and affiliation.
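AUTO itself is driven through its own constants files and user equation files, so the snippet below is not AUTO code; it is only a minimal sketch, in plain Python, of the idea of natural-parameter continuation for a scalar algebraic system f(u,p) = 0 (the example equation u^3 - u - p = 0 is invented for illustration). Unlike AUTO's pseudo-arclength continuation, this naive version cannot follow a branch around a fold.

# Minimal sketch only -- not AUTO. Natural-parameter continuation for
# f(u, p) = u**3 - u - p = 0, following one solution branch as p varies.

def f(u, p):
    return u**3 - u - p

def df_du(u, p):
    return 3 * u**2 - 1

def newton(u0, p, tol=1e-10, max_iter=50):
    """Solve f(u, p) = 0 for u by Newton's method, starting from u0."""
    u = u0
    for _ in range(max_iter):
        step = f(u, p) / df_du(u, p)
        u -= step
        if abs(step) < tol:
            return u
    raise RuntimeError("Newton iteration did not converge")

u = 1.5   # initial guess on the branch at p = 1.0
for i in range(21):
    p = 1.0 + 0.1 * i
    u = newton(u, p)   # the previous solution seeds the next continuation step
    print(f"p = {p:.2f}  u = {u:.6f}")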
8585
dbpedia
3
84
https://docs.yoctoproject.org/2.4.1/ref-manual/ref-manual.html
en
Yocto Project Reference Manual
[ "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/YP-flow-diagram.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/building-an-image.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/buildhistory.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/buildhistory-web.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/git-workflow.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/source-repos.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/index-downloads.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/yp-download.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/yocto-environment-ref.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/user-configuration.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/layer-input.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/source-input.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/package-feeds.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/source-fetching.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/patching.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/configuration-compile-autoreconf.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/analysis-for-package-splitting.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/image-generation.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/sdk-generation.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/images.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/sdk.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/cross-development-toolchains.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/build-workspace-directory.png" ]
[]
[]
[ "" ]
null
[ "Scott Rifenbark" ]
null
null
Class files are used to abstract common functionality and share it amongst multiple recipe (.bb) files. To use a class file, you simply make sure the recipe inherits the class. In most cases, when a recipe inherits a class it is enough to enable its features. There are cases, however, where in the recipe you might need to set variables or override some default behavior. Any Metadata usually found in a recipe can also be placed in a class file. Class files are identified by the extension .bbclass and are usually placed in a classes/ directory beneath the meta*/ directory found in the Source Directory. Class files can also be pointed to by BUILDDIR (e.g. build/) in the same way as .conf files in the conf directory. Class files are searched for in BBPATH using the same method by which .conf files are searched.
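As a minimal sketch of the inherit mechanism described above (the class and recipe names are invented, and the syntax follows the 2.4-era override style used by this manual), a shared class and a recipe that uses it might look like this:

# classes/mylogging.bbclass -- hypothetical class holding shared functionality
MYLOGGING_MSG ?= "packaging ${PN}"

do_install_append() {
    bbnote "${MYLOGGING_MSG}"
}

# myapp_1.0.bb -- inheriting the class is usually all the recipe needs to do
inherit mylogging

# The recipe can still set variables to override the defaults the class provides
MYLOGGING_MSG = "custom message for myapp"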
8585
dbpedia
2
27
https://www.kali.org/tools/what-is-python/
en
Kali Linux Tools
https://www.kali.org/ima…es/kali-logo.svg
https://www.kali.org/ima…es/kali-logo.svg
[ "https://www.kali.org/images/kali-tools-icon-missing.svg" ]
[]
[]
[ "kali", "linux", "kalilinux", "Penetration", "Testing", "Penetration Testing", "Distribution", "Advanced" ]
null
[]
2024-05-23T00:00:00+00:00
en
https://www.kali.org/images/favicon.png
Kali Linux
https://www.kali.org/tools/what-is-python/
python-dev-is-python3 Starting with the Debian 11 (bullseye) and Ubuntu 20.04 LTS (focal) releases, all python packages use explicit python3 or python2 interpreter and do not use unversioned /usr/bin/python-config at all. Some third-party code is now predominantly python3 based, yet may use /usr/bin/python-config. This is a convenience package which ships a symlink to point /usr/bin/python-config script at the current default python3. It may improve compatibility with other modern systems, whilst breaking some obsolete or 3rd-party software. No packages may declare dependencies on this package. Installed size: 13 KB How to install: sudo apt install python-dev-is-python3 Dependencies: python-is-python3 python3-dev pdb The Python debugger root@kali:~# pdb -h usage: pdb.py [-c command] ... [-m module | pyfile] [arg] ... Debug the Python program given by pyfile. Alternatively, an executable module or package to debug can be specified using the -m switch. Initial commands are read from .pdbrc files in your home directory and in the current directory, if they exist. Commands supplied with -c are executed after commands from .pdbrc files. To let the script run until an exception occurs, use "-c continue". To let the script run up to a given line X in the debugged file, use "-c 'until X'". python-config Output build options for python C/C++ extensions or embedding root@kali:~# python-config --help Usage: /usr/bin/python-config --prefix|--exec-prefix|--includes|--libs|--cflags|--ldflags|--extension-suffix|--help|--abiflags|--configdir|--embed python-is-python3 Starting with the Debian 11 (bullseye) and Ubuntu 20.04 LTS (focal) releases, all python packages use explicit python3 or python2 interpreter and do not use unversioned /usr/bin/python at all. Some third-party code is now predominantly python3 based, yet may use /usr/bin/python. This is a convenience package which ships a symlink to point the /usr/bin/python interpreter at the current default python3. It may improve compatibility with other modern systems, whilst breaking some obsolete or 3rd-party software. No packages may declare dependencies on this package. Installed size: 15 KB How to install: sudo apt install python-is-python3 Dependencies: python3 pydoc The Python documentation tool root@kali:~# pydoc -h pydoc - the Python documentation tool pydoc <name> ... Show text documentation on something. <name> may be the name of a Python keyword, topic, function, module, or package, or a dotted reference to a class or function within a module or module in a package. If <name> contains a '/', it is used as the path to a Python source file to document. If name is 'keywords', 'topics', or 'modules', a listing of these things is displayed. pydoc -k <keyword> Search for a keyword in the synopsis lines of all available modules. pydoc -n <hostname> Start an HTTP server with the given hostname (default: localhost). pydoc -p <port> Start an HTTP server on the given port on the local machine. Port number 0 can be used to get an arbitrary unused port. pydoc -b Start an HTTP server on an arbitrary unused port and open a web browser to interactively browse documentation. This option can be used in combination with -n and/or -p. pydoc -w <name> ... Write out the HTML documentation for a module to a file in the current directory. If <name> contains a '/', it is treated as a filename; if it names a directory, documentation is written for all the contents. python An interpreted, interactive, object-oriented programming language
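As a small illustration of the pdb usage shown above (the script below is invented for the example), you can run a failing script under the debugger and let it continue until the exception occurs:

# buggy.py -- invented example script
def average(values):
    return sum(values) / len(values)   # raises ZeroDivisionError for []

if __name__ == "__main__":
    print(average([]))

# Run it under the debugger, continuing until the exception is raised:
#     python3 -m pdb -c continue buggy.py
# At the (Pdb) prompt, useful commands include l (list source),
# p values (print a variable), where (stack trace), and q (quit).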
8585
dbpedia
1
25
https://derflounder.wordpress.com/2018/07/06/automating-autopkg-runs-with-autopkg-conductor/
en
Automating AutoPkg runs with autopkg-conductor
https://derflounder.word…kage_message.png
https://derflounder.word…kage_message.png
[ "https://derflounder.wordpress.com/wp-content/uploads/2018/07/autopkg_new_package_message.png?w=298&h=126", "https://derflounder.wordpress.com/wp-content/uploads/2018/07/autopkg_no_new_package_message.png?w=299&h=115", "https://derflounder.wordpress.com/wp-content/uploads/2018/07/screen-shot-2018-07-05-at-10-38-32-pm.png?w=299&h=220", "https://derflounder.wordpress.com/wp-content/uploads/2018/07/screen-shot-2018-07-05-at-10-13-10-pm.png?w=595", "https://derflounder.wordpress.com/wp-content/uploads/2018/07/screen-shot-2018-07-05-at-9-14-08-pm.png?w=595", "https://derflounder.wordpress.com/wp-content/uploads/2018/07/screen-shot-2018-07-05-at-10-38-32-pm1.png?w=299&h=220", "https://i0.wp.com/lh5.googleusercontent.com/--II1FaafOT8/AAAAAAAAAAI/AAAAAAAAAIw/D7uH7UUBjxM/photo.jpg?resize=32%2C32&ssl=1", "https://2.gravatar.com/avatar/eb7a55fd73bb493f9c58541095f0fce21122e00109080c7a1e5813f6ef1b31fe?s=32&d=identicon&r=G", "https://1.gravatar.com/avatar/d678374fabfd2ce5e42a8d2ee219c878fe28d4d27ba3bdfe0905bcdd49a78f9f?s=48&d=identicon&r=G", "https://0.gravatar.com/avatar/9a6eb242728c9344e6078f49f7297e7bbe7b5c5af0b3f99952f35686499ef79c?s=48&d=identicon&r=G", "https://0.gravatar.com/avatar/9851bc7e13a6a30c801e72cd65e1fcc49818a778abfbfc923093a7ae8d60564a?s=48&d=identicon&r=G", "https://1.gravatar.com/avatar/d01b71732017a03705b60dcd6ba6669a9b5148633fa12b8ae7531c3143604cc9?s=48&d=identicon&r=G", "https://1.gravatar.com/avatar/da3a0520ed1bfc83e1f3baa3c3947cf7f0ebb511790f996d7eabad8310adcdb1?s=48&d=identicon&r=G", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[ "Kevin Strick" ]
2018-07-06T00:00:00
About two weeks ago, I noticed I had an SSL error cropping up with one of my AutoPkg recipes: When I investigated what it meant, I wound up at this lengthy issue opened for Python's requests module. In the end, it seemed to boil down to four issues: I was running AutoPkg on macOS Sierra 10.12.6.…
en
https://s1.wp.com/i/favicon.ico
Der Flounder
https://derflounder.wordpress.com/2018/07/06/automating-autopkg-runs-with-autopkg-conductor/
About two weeks ago, I noticed I had an SSL error cropping up with one of my AutoPkg recipes: [Errno socket error] EOF occurred in violation of protocol (_ssl.c:590) When I investigated what it meant, I wound up at this lengthy issue opened for Python’s requests module. In the end, it seemed to boil down to four issues: I was running AutoPkg on macOS Sierra 10.12.6. The recipe I was running used a processor which called Python’s urllib2 library. Python’s urllib2 library was calling the OS’s installed version of OpenSSL to connect to a server using TLSv1.2 . The version of OpenSSL included with 10.12.6 does not support TLSv1.2 for the urllib2 library. When I looked into the situation on macOS High Sierra 10.13.5, Apple had addressed the problem by replacing OpenSSL with LibreSSL. Among other improvements, LibreSSL allowed Python’s urllib2 library to be able to connect to servers using TLSv1.2. Problem solved! Until I ran into another problem. I had been using AutoPkgr as my way of managing AutoPkg and scheduling AutoPkg runs. However, when I set up AutoPkgr on a 10.13.5 VM and scheduled my AutoPkg nightly run, nothing happened except my CPU spiked to 100% and AutoPkgr locked up with the pinwheel of patience. OK, maybe it was something with my VM. No problem, set up a new macOS 10.13.5 VM. Same problem. Maybe it was because I was trying to run the VM on VMware’s ESXi? Set up a new VM running in VMware Fusion. Same problem. Maybe AutoPkgr was getting confused by Apple File System? I set up a 10.13.5 VM which used an HFS+ boot volume. Same problem, replicated on both ESXi and Fusion. No matter what I tried, trying to run recipes using AutoPkgr on macOS 10.13.x resulted in the following: The VM’s CPU spiking to 100% AutoPkgr locking up with the pinwheel of patience My AutoPkg recipes not running I was able to eliminate AutoPkg itself as being the issue, as running recipes from the command line using AutoPkg worked fine. With that information in mind, I decided to see if I could replicate what I most liked about using AutoPkgr into another form. In the end, my needs boiled down to three: I wanted to be able to run a list of AutoPkg recipes on a scheduled basis. These recipes would be .jss recipes for uploading to a Jamf Pro server. I wanted to be able to post information about those AutoPkg recipes to a Slack channel I wanted all the error messages from an AutoPkg run, but I didn’t care about all the information that came from a successful AutoPkg run. With that, I decided to draw on some earlier work done by Sean Kaiser, a colleague who had written a script for managing AutoPkg in the pre-AutoPkgr days. For more details, please see below the jump. Sean’s solution relies on a script and LaunchDaemon running on a Mac, where it runs hourly and is set up to only send him emails if the AutoPkg logs are different from previous runs. The email notifications are a diff against the previous logs, so only the true differences get sent. For those interested, Sean’s script is available from here: https://github.com/seankaiser/automation-scripts/tree/master/autopkg I was more focused on a once-daily run, so I didn’t want to use the diff methodology. After some more research, I found that my colleague Graham Pugh had written pretty much exactly what I needed: An AutoPkg post-processor named Slacker which could be used with an AutoPkg recipe list of .jss recipes to post the results to a Slack channel. 
I forked a copy of the Slacker post-processor and (with Graham’s help) made some edits to it to have the output appear exactly the way I wanted it to. New package message: No new package message: Along with the Slacker post-processor, I also found a script for sending multiline output to a Slack channel. This would allow me to send the complete error log from an AutoPkg run to a specified Slack webhook. Using all of this, I wrote a script named autopkg-conductor which is designed to do the following: 1. Detect a list of AutoPkg recipes at a defined location and verify that the list is readable. 2. If the AutoPkg recipe list is readable and available, run the following actions: A. Verify that AutoPkg is installed. B. Update all available AutoPkg repos with the latest recipes. C. Run the AutoPkg recipes in the list. The AutoPkg run has all actions logged to ~/Library/Logs, with the logfiles being named autopkg-run-for- followed by the date. If the optional slack_post_processor and slack_webhook variables are both populated, any AutoPkg .jss recipes should have their output sent to the Slack webhook specified in the slack_webhook variable. If only the slack_webhook variable is populated, all output from the AutoPkg run is sent to the Slack channel. No filtering is applied; everything is sent. If neither the slack_post_processor nor the slack_webhook variable is populated, no information is sent to Slack. All AutoPkg run information will be in the logs stored in ~/Library/Logs. For scheduled runs, I recommend the following: Set up a user account named autopkg to run AutoPkg in. Copy the autopkg-conductor script to /usr/local/bin/autopkg-conductor.sh and set the autopkg-conductor.sh script to be executable. Set up a LaunchDaemon to run /usr/local/bin/autopkg-conductor.sh at a pre-determined time or interval. For this example, the LaunchDaemon shown below will run /usr/local/bin/autopkg-conductor.sh as the autopkg user once a day at 2:00 AM. The autopkg-conductor script is available below. It’s also available from GitHub using the following link:
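The LaunchDaemon that the post refers to as "shown below" did not survive the page extraction. The following is a minimal reconstruction sketch of such a job definition; the label com.company.autopkg-conductor is an assumption for this sketch, not necessarily the one used in the original post.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label is an assumption for this sketch -->
    <key>Label</key>
    <string>com.company.autopkg-conductor</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/autopkg-conductor.sh</string>
    </array>
    <!-- Run as the dedicated autopkg user described above -->
    <key>UserName</key>
    <string>autopkg</string>
    <!-- Once a day at 2:00 AM -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>2</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>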
8585
dbpedia
0
86
https://askubuntu.com/questions/236202/how-do-i-contribute-an-autopkg-test-to-ubuntu
en
How do I contribute an autopkg test to ubuntu?
https://cdn.sstatic.net/…g?v=c492c9229955
https://cdn.sstatic.net/…g?v=c492c9229955
[ "https://cdn.sstatic.net/Sites/askubuntu/Img/logo.svg?v=472cf2768bba", "https://www.gravatar.com/avatar/d9c4a52eaeaf2d863810d1ba89ac806d?s=64&d=identicon&r=PG", "https://www.gravatar.com/avatar/4c810a4e5b80c311b3916e6f933b8387?s=64&d=identicon&r=PG", "https://www.gravatar.com/avatar/4c810a4e5b80c311b3916e6f933b8387?s=64&d=identicon&r=PG", "https://askubuntu.com/posts/236202/ivc/3f38?prg=8104b01a-d8c2-48c9-af08-ce3ac1072016" ]
[]
[]
[ "" ]
null
[]
2013-01-04T17:07:55
How do I contribute an autopkg test for a ubuntu package?
en
https://cdn.sstatic.net/Sites/askubuntu/Img/favicon.ico?v=928dfb7c1990
Ask Ubuntu
https://askubuntu.com/questions/236202/how-do-i-contribute-an-autopkg-test-to-ubuntu
Autopkg tests can be written for any ubuntu package. The tests follow the DEP 8 specification for including tests as part of a deb package. Writing a test A test can be written in a myriad of languages. Common examples are C, bash, python and perl. To write a test: Branch the package bzr branch ubuntu: Add a source section in debian/control called XS-Testsuite: autopkgtest Add the tests to debian/tests/ folder Add a debian/tests/control which specifies the requirements for the testbed. For example: Tests: build Depends: build-essential Contributing tests Getting the test into ubuntu follows the normal ubuntu developer process. In short, you Branch the source of the package you wish to add a test Edit the debian/control and debian/tests/control file to enable the tests Add the test(s) to debian/tests folder Commit your changes and propose a merge More information To see a list of current autopkgtests, you can see the live jenkins output of all the tests that are currently being automatically run here.
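As a sketch of the layout described above (the package name mypackage and the test name smoke are invented for illustration), debian/tests/ could contain a control file plus the test itself, which DEP 8 allows to be written as any executable script — here Python:

# debian/tests/control (hypothetical)
Tests: smoke
Depends: python3, mypackage

# debian/tests/smoke (executable)
#!/usr/bin/env python3
"""Minimal smoke test: exit non-zero on any failure."""
import subprocess
import sys

result = subprocess.run(["mypackage", "--version"], capture_output=True, text=True)
if result.returncode != 0:
    sys.exit("mypackage --version failed: " + result.stderr)
print("smoke test passed:", result.stdout.strip())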
8585
dbpedia
0
87
https://robbmann.io/posts/emacs-treesit-auto/
en
Getting Emacs 29 to Automatically Use Tree-sitter Modes
https://robbmann.io/favicon-32x32.png
https://robbmann.io/favicon-32x32.png
[ "https://robbmann.io/img/robb_python_grey_huf4e52b91f6345de53e62f8c2a64f08ae_471021_192x192_fill_box_smart1_3.png" ]
[]
[]
[ "" ]
null
[ "Robert Enzmann" ]
2023-01-22T00:00:00-05:00
It's Robb, man!
en
/apple-touch-icon.png
robbmann
https://robbmann.io/posts/emacs-treesit-auto/
Recently, /u/casouri posted a guide to getting started with the new built-in tree-sitter capabilities for Emacs 29. In that post, they mention that there will be no automatic major-mode fallback for Emacs 29. That means I would have to use M-x python-ts-mode manually, or change the entry in auto-mode-alist to use python-ts-mode, in order to take advantage of the new tree-sitter functionality. Of course, that would still leave the problem of when the Python tree-sitter grammar isn’t installed, in which case python-ts-mode is going to fail. To solve this issue, I wrote a very small package that adjusts the new major-mode-remap-alist variable based on what grammars are ready on your machine. If a language’s tree-sitter grammar is installed, it will use that mode. If not, it will use the original major mode. Simple as that! For the impatient: treesit-auto.el # The package I wound up with is available on GitHub and MELPA as treesit-auto.el. So long as MELPA is on your package-archives list like this: Then you can use M-x package-refresh-contents followed by M-x package-install RET treesit-auto. If you also like having a local copy of the git repository itself, then package-vc-install is a better fit: Then, in your configuration file: See the README on GitHub for all the goodies you can put in the :config block. Origins of treesit-auto.el # The recommendation in Yuan’s article was to use define-derived-mode along with treesit-ready-p. In the NEWS (C-h n), however, I noticed a new variable major-mode-remap-alist, which at a glance appears suitable for a similar cause. For my Emacs configuration, I had two things I wanted to accomplish: Set all of the URLs for treesit-language-source-alist up front, so that I need only use treesit-install-language-grammar RET python RET, instead of writing out everything interactively Use the same list of available grammars to remap between tree-sitter modes and their default fallbacks Initially, I tried Yuan’s suggested approach with define-derived-mode, but I didn’t want to repeat code for every major mode I wanted fallback for. Trying to expand the major mode names correctly in a loop wound up unwieldy, because expanding the names properly for the define-derived-mode macro was too challenging for my current skill level with Emacs lisp, and wound up cluttering the global namespace more than I liked when auto-completing through M-x. Instead, I decided take a two step approach: Set up treesit-language-source-alist with the grammars I’ll probably use Loop over the keys in this alist to define the association between a tree-sitter mode and its default fallback through major-mode-remap-alist This makes the code we need to actually write a little simpler, since an association like python-mode to python-ts-mode can be automatic (since they share a name), and we can use a customizable alist for specifying the edge cases, such as toml-ts-mode falling back to conf-toml-mode. To start with, I just had this: At this point, I can just use M-x treesit-install-language-grammar RET bash to get the Bash grammar, and similarly for other languages. Then, I made an alist of the “weird” cases: Setting the CDR to nil explicitly means I didn’t want any type of fallback to be attempted whatsoever for a given tree-sitter mode, even if something similarly named might be installed. Finally, I had a simple loop where I constructed the symbols for the mode and the tree-sitter mode via intern and concat, and check whether the tree-sitter version is available through treesit-ready-p. 
If it is, we remap the base mode to the tree-sitter one in major-mode-remap-alist. If it isn’t ready, then we do the opposite: remap the tree-sitter mode to the base version.
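The post describes this loop only in prose; a condensed sketch of it (Emacs 29+, with just two grammars listed for brevity and the special-case fallback alist omitted) might look like the following.

;; Condensed sketch of the remapping loop described above (Emacs 29+).
(setq treesit-language-source-alist
      '((python "https://github.com/tree-sitter/tree-sitter-python")
        (c "https://github.com/tree-sitter/tree-sitter-c")))

(dolist (entry treesit-language-source-alist)
  (let* ((lang (car entry))
         (name (symbol-name lang))
         (ts-mode (intern (concat name "-ts-mode")))
         (base-mode (intern (concat name "-mode"))))
    (if (treesit-ready-p lang t)
        ;; Grammar installed: remap the base mode to the tree-sitter mode.
        (add-to-list 'major-mode-remap-alist (cons base-mode ts-mode))
      ;; Grammar missing: remap the tree-sitter mode back to the base mode.
      (add-to-list 'major-mode-remap-alist (cons ts-mode base-mode)))))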
8585
dbpedia
2
26
https://realpython.com/python-all-attribute/
en
: Packages, Modules, and Wildcard Imports – Real Python
https://files.realpython…698d61e0300d.jpg
https://files.realpython…698d61e0300d.jpg
[ "https://realpython.com/static/real-python-logo.893c30edea53.svg", "https://realpython.com/static/pytrick-dict-merge.4201a0125a5e.png", "https://files.realpython.com/media/Pythons-__all__-Set-Up-Your-Packages-and-Modules-for-Wildcard-Imports_Watermarked.698d61e0300d.jpg", "https://realpython.com/static/pytrick-dict-merge.4201a0125a5e.png", "https://realpython.com/cdn-cgi/image/width=862,height=862,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/Perfil_final1.9f896bc212f6.jpg", "https://realpython.com/cdn-cgi/image/width=862,height=862,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/Perfil_final1.9f896bc212f6.jpg", "https://realpython.com/cdn-cgi/image/width=959,height=959,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/PP.9b8b026f75b8.jpg", "https://realpython.com/cdn-cgi/image/width=800,height=800,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/gahjelle.470149ee709e.jpg", "https://realpython.com/cdn-cgi/image/width=400,height=400,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/VZxEtUor_400x400.7169c68e3950.jpg", "https://realpython.com/cdn-cgi/image/width=456,height=456,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/martin_breuss_python_square.efb2b07faf9f.jpg", "https://realpython.com/static/videos/lesson-locked.f5105cfd26db.svg", "https://realpython.com/static/videos/lesson-locked.f5105cfd26db.svg", "https://realpython.com/static/videos/lesson-locked.f5105cfd26db.svg", "https://files.realpython.com/media/Pythons-__all__-Set-Up-Your-Packages-and-Modules-for-Wildcard-Imports_Watermarked.698d61e0300d.jpg" ]
[]
[]
[ "" ]
null
[ "Real Python" ]
2024-03-04T14:00:00+00:00
In this tutorial, you'll learn about wildcard imports and the __all__ variable in Python. With __all__, you can prepare your packages and modules for wildcard imports, which are a quick way to import everything.
en
/static/favicon.68cbf4197b0c.png
https://realpython.com/python-all-attribute/
Importing Objects in Python When creating a Python project or application, you’ll need a way to access code from the standard library or third-party libraries. You’ll also need to access your own code from the multiple files that may make up your project. Python’s import system is the mechanism that allows you to do this. The import system lets you get objects in different ways. You can use: Explicit imports Wildcard imports In the following sections, you’ll learn the basics of both strategies. You’ll learn about the different syntax that you can use in each case and the result of running an import statement. Explicit Imports In Python, when you need to get a specific object from a module or a particular module from a package, you can use an explicit import statement. This type of statement allows you to bring the target object to your current namespace so that you can use the object in your code. To import a module by its name, you can use the following syntax: Python import module [as name] Copied! This statement allows you to import a module by its name. The module must be listed in Python’s import path, which is a list of locations where the path based finder searches when you run an import. The part of the syntax that’s enclosed in square brackets is optional and allows you to create an alias of the imported name. This practice can help you avoid name collisions in your code. As an example, say that you have the following module: Python calculations.py def add(a, b): return float(a + b) def subtract(a, b): return float(a - b) def multiply(a, b): return float(a * b) def divide(a, b): return float(a / b) Copied! This sample module provides functions that allow you to perform basic calculations. The containing module is called calculations.py. To import this module and use the functions in your code, go ahead and start a REPL session in the same directory where you saved the file. Then run the following code: Python >>> import calculations >>> calculations.add(2, 4) 6.0 >>> calculations.subtract(8, 4) 4.0 >>> calculations.multiply(5, 2) 10.0 >>> calculations.divide(12, 2) 6.0 Copied! The import statement at the beginning of this code snippet brings the module name to your current namespace. To use the functions or any other object from calculations, you need to use fully qualified names with the dot notation. Note: You can create an alias of calculations using the following syntax: Python import calculations as calc Copied! This practice allows you to avoid name clashes in your code. In some contexts, it’s also common practice to reduce the number of characters to type when using qualified names. For example, if you’re familiar with libraries like NumPy and pandas, then you’ll know that it’s common to use the following imports: Python import numpy as np import pandas as pd Copied! Using shorter aliases when you import modules facilitates using their content by taking advantage of qualified names. You can also use a similar syntax to import a Python package: Python import package [as name] Copied! In this case, Python loads the content of your package’s __init__.py file into your current namespace. If that file exports objects, then those objects will be available to you. Finally, if you want to be more specific in what you import into your current namespace, then you can use the following syntax: Python from module import name [as name] Copied! With this import statement, you can import specific names from a given module. 
This approach is recommended when you only need a few names from a long module that defines many objects or when you don’t expect name collisions in your code. To continue with the calculations module, you can import the needed function only: Python >>> from calculations import add >>> add(2, 4) 6.0 Copied! In this example, you only use the add() function. The from module import name syntax lets you import the target name explicitly. In this case, the rest of the functions and the module itself won’t be accessible in your namespace or scope. Wildcard Imports on Modules When you’re working with Python modules, a wildcard import is a type of import that allows you to get all the public names from a module in one go. This type of import has the following syntax: Python from module import * Copied! The name wildcard import derives from the asterisk at the end of the statement, which denotes that you want to import all the objects from module. Go back to your terminal window and restart your REPL session. Then, run the following code: Python >>> from calculations import * >>> dir() [ ... 'add', 'divide', 'multiply', 'subtract' ] Copied! In this code snippet, you first run a wildcard import. This import makes available all the names from the calculations modules and brings them to your current namespace. The built-in dir() function allows you to see what names are available in the current namespace. As you can confirm from the output, all the functions that live in calculations are now available. When you’re completely sure that you need all the objects that a given module defines, using a wildcard import is a quick solution. In practice, this situation is rare, and you just end up cluttering your namespace with unneeded objects and names. Using wildcard imports is explicitly discouraged in PEP 8 when they say: Wildcard imports (from <module> import *) should be avoided, as they make it unclear which names are present in the namespace, confusing both readers and many automated tools. There is one defensible use case for a wildcard import, which is to republish an internal interface as part of a public API (for example, overwriting a pure Python implementation of an interface with the definitions from an optional accelerator module and exactly which definitions will be overwritten isn’t known in advance). (Source) The main drawback of wildcard import is that you don’t have control over the imported objects. You can’t be specific. Therefore, you can confuse the users of your code and clutter their namespace with unnecessary objects. Even though wildcard imports are discouraged, some libraries and tools use them. For example, if you search for applications built with Tkinter, then you’ll realize that many of the examples use the form: Python from tkinter import * Copied! This import gives you access to all the objects defined in the tkinter module, which is pretty convenient if you’re starting to learn how to use this tool. You may find many other tools and third-party libraries that use wildcard imports for code examples in their documentation, and that’s okay. However, in real-world projects, you should avoid this type of import. In practice, you can’t control how the users of your code will manage their imports. So, you better prepare your code for wildcard imports. You’ll learn how to do this in the upcoming sections. First, you’ll learn about using wildcard imports on packages. 
Wildcard Import and Non-Public Names Python has a well-established naming convention that allows you to tell the users of your code when a given name in a module is for internal or external use. If an object’s name starts with a single leading underscore, then that name is considered non-public, so it’s for internal use only. In contrast, if a name starts with a lowercase or uppercase letter, then that name is public and, therefore, is part of the module’s public API. Note: In Python, to define identifiers or names, you can use the uppercase and lowercase letters, the underscore (_), and the digits from 0 through 9. Note that you can’t use a digit as the first character in the name. When you have non-public names in a given module, you should know that wildcard imports won’t import those names. Say that you have the following module: Python shapes.py from math import pi as _pi class Circle: def __init__(self, radius): self.radius = _validate(radius) def area(self): return _pi * self.radius**2 class Square: def __init__(self, side): self.side = _validate(side) def area(self): return self.side**2 def _validate(value): if not isinstance(value, int | float) or value <= 0: raise ValueError("positive number expected") return value Copied! In this module, you have two non-public objects _pi and _validate(). You know this because they have a leading underscore in their names. If someone runs a wildcard import on this module, then the non-public names won’t be imported: Python >>> from shapes import * >>> dir() [ 'Circle', 'Square', ... ] Copied! If you take a look at the output of dir(), then you’ll note that only the Circle and Square classes are available in your current namespace. The non-public objects, _pi and _validate(), aren’t available. So, wildcard imports won’t import non-public names. Wildcard Import on Packages Up to this point, you know how wildcard imports work with modules. You can also use this type of import with packages. In that case, the syntax is the same, but you need to use a package name rather than a module name: Python from package import * Copied! Now, what happens when you run this type of import? You may expect that this import causes Python to search the file system, find the modules and subpackages that are present in the package, and import them. However, doing this file system search could take a long time. Additionally, importing modules might have unwanted side effects, because when you import a module, all the executable code in that module runs. Because of these potential issues, Python has the __all__ special variable, which will allow you to explicitly define the list of modules that you want to expose to wildcard import in a given package. You’ll explore the details in the next section. Preparing Your Packages for Wildcard Imports With __all__ Python has two different behaviors when dealing with wildcard imports on packages. Both behaviors depend on whether the __all__ variable is present in the package’s __init__.py file. If __init__.py doesn’t define __all__, then nothing happens when you run a wildcard import on the package. If __init__.py defines __all__, then the objects listed in it will be imported. To illustrate the first behavior, go ahead and create a new folder called shapes/. Inside the folder, create the following files: shapes/ ├── __init__.py ├── circle.py ├── square.py └── utils.py Leave the __init__.py file empty for now. Take the code of your shapes.py file and split it into the rest of the files. 
Click the collapsible section below to see how to do this: Python shapes/circle.py from math import pi as _pi from shapes.utils import validate class Circle: def __init__(self, radius): self.radius = validate(radius) def area(self): return _pi * self.radius**2 Copied! Python shapes/square.py from shapes.utils import validate class Square: def __init__(self, side): self.side = validate(side) def area(self): return self.side**2 Copied! Python shapes/utils.py def validate(value): if not isinstance(value, int | float) or value <= 0: raise ValueError("positive number expected") return value Copied! In this sample package, the __init__.py file doesn’t define the __all__ variable. So, if you run a wildcard import on this package, then you won’t import any name into your namespace: Python >>> from shapes import * >>> dir() [ '__annotations__', '__builtins__', ... ] Copied! In this example, the dir() function reveals that the wildcard import didn’t bring any name to your current namespace. The circle, square, and utils modules aren’t available in your namespace. If you don’t define __all__ in a package, the statement from package import * doesn’t import all the modules from the target package into the current namespace. In that case, the import statement only ensures that the package was imported and runs any code in __init__.py. If you want to prepare a Python package for wildcard imports, then you need to define the __all__ variable in the package’s __init__.py file. The __all__ variable should be a list of strings containing those names that you want to export from your package when someone uses a wildcard import. Go ahead and add the following line to the file: Python shapes/__init__.py __all__ = ["circle", "square"] Copied! By defining the __all__ variable in the __init__.py file, you establish the module names that a wildcard import will bring into your namespace. In this case, you only want to export the circle and square modules from your package. Now, run the following code in your interactive session: Python >>> from shapes import * >>> dir() [ ... 'circle', 'square' ] Copied! Now, when you run a wildcard import on your shapes package, the circle and square modules become available in your namespace. Note that the utils module isn’t available because you didn’t list it in __all__. It’s up to you as the package author to build this list and keep it up-to-date. Maintaining the list up-to-date is crucial when you release a new version of your package. In this case, it’s also important to note that you’ll get an AttributeError exception if __all__ contains undefined names. Note: When defining __all__, you must be aware that modules might be shadowed by locally defined names. For example, if you added a square() function to the __init__.py file, the function will shadow the square module. Finally, if you define __all__ as an empty list, then nothing will be exported from your package. It’s like not defining __all__ in the package. Exposing Names From Modules and Packages With __all__ You already know that when you run a wildcard import on a module, then you’ll import all the public constants, variables, functions, classes, and other objects in the module. Sometimes, this behavior is okay. However, in some cases, you need to have fine control over what the module exports. You can also use __all__ for this goal. Another interesting use case of __all__ is when you need to export specific names or objects from a package. In this case, you can also use __all__ in a slightly different way. 
In the following sections, you’ll learn how to use __all__ for controlling what names a module exports and how to export specific names from a package. Names From a Module You can use the __all__ variable to explicitly control what names a module exposes to wildcard imports. In this sense, __all__ allows you to establish a module’s public interface or API. This technique is also a way to explicitly communicate what the module’s API is. If you have a large module with many public names, then you can use __all__ to create a list of exportable names so that wildcard imports don’t pollute the namespace of your code’s users. In general, modules can have a few different types of names: Public names are part of the module’s public interface. Non-public names are for internal use only. Imported names are names that the module imports as public or non-public names. As you already know, public names are those that start with a lowercase or uppercase letter. Non-public names are those that start with a single leading underscore. Finally, imported names are those that you import as public names in a module. These names are also exported from that module. So, that’s why you’ll see imports like the following in many codebases: Python import sys as _sys Copied! In this example, you import the sys module as _sys. The as specifier lets you create an alias for the imported object. In this case, the alias is a non-public name. With this tiny addition to your import statement, you prevent sys from being exported when someone uses a wildcard import on the module. So, if you don’t want to export imported objects from a module, then use the as specifier and a non-public alias for the imported objects. Ideally, the __all__ list should only contain public names that are defined in the containing module. As an example, say that you have the following module containing functions and classes that allow you to make HTTP requests: Python webreader.py import requests __all__ = ["get_page_content", "WebPage"] BASE_URL = "http://example.com" def get_page_content(page): return _fetch_page(page).text def _fetch_page(page): url = f"{BASE_URL}/{page}" return requests.get(url) class WebPage: def __init__(self, page): self.response = _fetch_page(page) def get_content(self): return self.response.text Copied! In this sample module, you import the requests library. Next, you define the __all__ variable. In this example, __all__ includes the get_page_content() function and the WebPage class, which are public names. Note: You need to have the requests library installed on your current Python environment for the example above to work correctly. Note that the helper function _fetch_page() is for internal use only. So, you don’t want to expose it to wildcard imports. Additionally, you don’t want the BASE_URL constant or the imported requests module to be exposed to wildcard imports. Here’s how the module responds to a wildcard import: Python >>> from webreader import * >>> dir() [ 'WebPage', ... 'get_page_content' ] Copied! When you run a wildcard import on the webreader module, only the names listed in __all__ are imported. Now go ahead and comment out the line where you define __all__, restart your REPL session, and run the import again: Python >>> from webreader import * >>> dir() [ 'BASE_URL', 'WebPage', ... 'get_page_content', 'requests' ] Copied! A quick look at the output of dir() shows that now your module exports all the public names, including BASE_URL and even the imported requests library. 
The __all__ variable lets you have full control over what a module exposes to wildcard imports. However, note that __all__ doesn’t prevent you from importing specific names from a module using an explicit import:

>>> from webreader import _fetch_page
>>> dir()
[..., '_fetch_page']

Note that you can use an explicit import to bring in any name from a given module, even non-public names such as _fetch_page() in the example above.

Names From a Package

In the previous section, you learned how to use __all__ to define which objects are exposed to wildcard imports. Sometimes, you want to do something similar but at the package level. If you want to control the objects and names that a package exposes to wildcard imports, then you can do something like the following in the package’s __init__.py file:

package/__init__.py
from module_0 import name_0, name_1, name_2, name_3
from module_1 import name_4, name_5, name_6

__all__ = [
    "name_0",
    "name_1",
    "name_2",
    "name_3",
    "name_4",
    "name_5",
    "name_6",
]

The import statements tell Python to grab the names from each module in the package. Then, in __all__, you list the imported names as strings. This technique is great for those cases where you have a package with many modules, and you want to provide a direct path for imports. As an example of how this technique works in practice, get back to the shapes package and update the __init__.py file as in the code below:

shapes/__init__.py
from shapes.circle import Circle
from shapes.square import Square

__all__ = ["Circle", "Square"]

In this update, you’ve added two explicit imports to get the Circle and Square classes from their respective modules. Then, you add the class names as strings to the __all__ variable. Here’s how the package responds to wildcard imports now:

>>> from shapes import *
>>> dir()
['Circle', 'Square', ...]

Your shapes package exposes the Circle and Square classes to wildcard imports. These classes are what you’ve defined as the public interface of your package. Note how this technique facilitates direct access to names that otherwise you would have to import through qualified names.

Exploring Alternative Use Cases of __all__ in Python

Besides allowing you to control what your modules and packages expose to wildcard imports, the __all__ variable may serve other purposes. You can use __all__ to iterate over the names and objects that make up the public interface of a package or module. You can also take advantage of __all__ when you need to expose dunder names.

Iterating Over a Package’s Interface

Because __all__ is typically a list object, you can use it to iterate over the objects that make up a module’s interface. The advantage of using __all__ over dir() is that the package author has explicitly defined which names they consider to be part of the public interface of their package. If you iterate over __all__, you won’t need to filter out non-public names as you’d have to when you iterate over dir(module). For example, say that you have a module with a few similar classes that share the same interface. Here’s a toy example:

vehicles.py
__all__ = ["Car", "Truck"]

class Car:
    def start(self):
        print("The car is starting")

    def drive(self):
        print("The car is driving")

    def stop(self):
        print("The car is stopping")

class Truck:
    def start(self):
        print("The truck is starting")

    def drive(self):
        print("The truck is driving")

    def stop(self):
        print("The truck is stopping")

In this module, you have two classes that represent vehicles. They share the same interface, so you can use them in similar places. You’ve also defined the __all__ variable, listing the two classes as strings. Now say that you want to use these classes in a loop. How can you do this? You can use __all__ as in the code below:

>>> import vehicles
>>> for v in vehicles.__all__:
...     vehicle = getattr(vehicles, v)()
...     vehicle.start()
...     vehicle.drive()
...     vehicle.stop()
...
The car is starting
The car is driving
The car is stopping
The truck is starting
The truck is driving
The truck is stopping

In this example, you first import the vehicles module. Then, you start a for loop over the __all__ variable. Because __all__ is a list of strings, you can use the built-in getattr() function to access the specified objects from vehicles. This way, you’ve iterated over the classes that make up the module’s public API.

Accessing Non-Public and Dunder Names

When you’re writing modules and packages, sometimes you use module-level names that start and end with double underscores. These names are typically called dunder names. There are a few dunder constants, such as __version__ and __author__, that you may need to expose to wildcard imports. Remember that the default behavior is that these names aren’t imported because they start with a leading underscore. To work around this issue, you can explicitly list these names in your __all__ variable. To illustrate this practice, get back to your webreader.py file and update it as in the code below:

webreader.py
import requests

__version__ = "1.0.0"
__author__ = "Real Python"

__all__ = ["get_page_content", "WebPage", "__version__", "__author__"]

BASE_URL = "http://example.com"

def get_page_content(page):
    return _fetch_page(page).text

# ...

In this update, you define two module-level constants that use dunder names. The first constant provides information about the module’s version, and the second constant holds the author’s name. Here’s how a wildcard import works on this module:

>>> from webreader import *
>>> dir()
['WebPage', ..., '__author__', ..., '__version__', 'get_page_content']

Now, when someone uses a wildcard import on the webreader module, they get the dunder variables imported into their namespace.

Using __all__ in Python: Benefits and Best Practices

Up to this point, you’ve learned a lot about the __all__ variable and how to use it in your code. While you don’t need to use __all__, it gives you complete control over what your packages and modules expose to wildcard imports. The __all__ variable is also a way to communicate to the users of your packages and modules which parts of your code they’re supposed to be using as the public interface. Here’s a quick summary of the main benefits that __all__ can provide:

Control over what you expose to wildcard imports: Using __all__ allows you to explicitly specify the public interface of your packages and modules. This practice prevents accidental usage of objects that shouldn’t be used from outside the module. It provides a clear boundary between the module’s internal implementation and its public API.

Enhance readability: Using __all__ allows other developers to quickly learn which objects make up the code’s API without examining the entire codebase. This improves code readability and saves time, especially for larger projects with multiple modules.

Reduce namespace cluttering: Using __all__ allows you to list the names to be exposed to wildcard imports. This way, you prevent other developers from polluting their namespace with unnecessary or conflicting names.

Even though wildcard imports are discouraged in Python, you have no way to control what the users of your code will do while using it. So, using __all__ is a good way to limit incorrect uses of your code. Here’s a quick list of best practices for using __all__ in your code:

Try to always define __all__ in your packages and modules. This variable provides you with explicit control over what other developers can import with wildcard imports.

Take advantage of __all__ as a tool for explicitly defining the public interface of your packages and modules. This practice makes it clear to other developers which objects are intended for external use and which are for internal use only.

Keep __all__ focused. The __all__ variable shouldn’t include every object in your module, just the ones that are part of the public API.

Use __all__ in conjunction with good documentation. Clear documentation about the intended use and behavior of each object in the public API is the best complement to __all__.

Be consistent in using __all__ across all your packages and modules. This practice allows other developers to better understand how to use your code.

Regularly review and update __all__. The __all__ variable should always reflect the latest changes in your code’s API. Regularly maintaining __all__ ensures that your code remains clean and usable.

Finally, remember that __all__ only affects wildcard imports. If a user of your code imports a specific object from a package or module, then that object will be imported even if you don’t have it listed in __all__.
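To tie the benefits and best practices above together, here is a minimal sketch of a module written along these lines; the module name inventory.py and every function in it are hypothetical examples rather than names taken from the article.

inventory.py  (hypothetical example)
"""Toy module that follows the __all__ guidelines above."""

__version__ = "0.1.0"

# Keep __all__ focused: public names only, plus the dunder constant
# that should survive a wildcard import.
__all__ = ["add_item", "count_items", "__version__"]

_items = []  # internal module state, intentionally non-public


def _load_items():
    # Not listed in __all__, so `from inventory import *` skips it.
    return list(_items)


def add_item(name):
    """Public API: store an item in the inventory."""
    _items.append(name)


def count_items():
    """Public API: return how many items are stored."""
    return len(_load_items())

With this layout, from inventory import * binds only add_item, count_items, and __version__, while _load_items() stays reachable solely through an explicit import.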
8585
dbpedia
1
73
https://tex.stackexchange.com/questions/110501/auto-package-download-for-texlive
en
Auto Package download for TeXLive
https://cdn.sstatic.net/…g?v=eaf26b461720
https://cdn.sstatic.net/…g?v=eaf26b461720
[ "https://cdn.sstatic.net/Sites/tex/Img/logo.svg?v=43890f90cb01", "https://www.gravatar.com/avatar/f7ab1a2cd3140f688ca1a7286f4b3c03?s=64&d=identicon&r=PG", "https://i.sstatic.net/XuZ4a.png?s=64", "https://i.sstatic.net/4r8yp.png?s=64", "https://i.sstatic.net/Y6tnm.png?s=64", "https://tex.stackexchange.com/posts/110501/ivc/3f38?prg=f2e31ef1-892e-4083-af41-8f36b9009332" ]
[]
[]
[ "" ]
null
[]
2013-04-24T14:19:53
I use MiKTeX on Windows and quite satisfied with it. Recently I started switching all my tasks toward open-source alternatives, and in the course I would love to use Linux. In Linux TeXLive is avai...
en
https://cdn.sstatic.net/Sites/tex/Img/favicon.ico?v=91427af8e60a
TeX - LaTeX Stack Exchange
https://tex.stackexchange.com/questions/110501/auto-package-download-for-texlive
While in MiKTeX an installation process is automatically triggered if you have, say, \usepackage{beamer} in a document preamble without the corresponding package installed, there is no such feature on TeX Live. The last statement is not true actually: as pointed out by wasteofspace in the comments, there is the texliveonfly package that implements on-demand installation in TeX Live 2010 and later. I never tested it and don't know if it has drawbacks. However, if you install the full (or almost full) TeX Live collection of packages (~2400) you will not need to add new packages; a periodic tlmgr update -all will take care of everything, including the installation of packages added to the TeX Live collection after your first full installation. This feature is explained in the tlmgr manual:

Analogously, if a package has been added to a collection on the server that is also installed locally, it will be added to the local installation. This is called auto-install and is announced as such when using the option --list. This auto-installation can be suppressed using the option --no-auto-install

The manual has lots of info on useful commands and it is recommended reading for every user. The downside is of course that you need the full set of packages installed on your machine, which may be a problem if you don't have enough free space. If you really can't spare 2GB from your HD, it is also possible to install TeX Live in a, say, 4GB USB key and live happily ever after :) Everything I just wrote requires that you install TeX Live with one of the methods described here. If you decide to use the TeX packages from your distro you are forced to follow their update policy, which is different for different distros.

texliveonfly

As mentioned in the comments, there is a TeX Live package called texliveonfly which you can use with texliveonfly filename.tex, and it will automatically download the right TeX Live packages. This also works for packages for which the LaTeX package name and the TeX Live package name don't match (for example the LaTeX rubikrotation package is contained in the rubik TeX Live package), and it also takes package dependencies into account.

Usage

Installing: It is a Python script so it requires Python to be installed. You can then install it as usual with tlmgr install texliveonfly. If you have to use sudo tlmgr here, you will have to use sudo texliveonfly later.

Running: If you go in your terminal to the directory of your filename.tex file, you can run it with texliveonfly filename.tex.

Other compilers: At the moment it uses pdflatex by default, but you can configure it to run with other compiler engines by using the --compiler (or -c) flag, like texliveonfly --compiler=lualatex filename.tex.

Compiler flags: You can pass flags for the compiler you use to texliveonfly using the --arguments (or -a) flag. For example, if you previously used latexmk -shell-escape -pdf filename.tex then you now use texliveonfly --compiler=latexmk --arguments='-shell-escape -pdf' filename.tex.

Known problems

There are some cases of missing packages which fail with a non-standard error message, for example babel when it's missing languages, in which case texliveonfly doesn't download them. At the moment the following packages are known to have to be installed manually (please edit if you find more):

Babel languages, for example for European languages install the collection-langeuropean package
Biblatex styles, e.g. for the nature style you need the biblatex-nature package
fontenc encodings, e.g. to get t2aenc.def you need the cyrillic package, and to get ly1enc.def you need the ly1 package
Packages involved when using the minted package, which are minted fvextra upquote lineno xstring framed caption (thanks to pablgonz for testing)

When running external programs like texcount in your LaTeX file, texliveonfly does not detect that you need the texcount package.

When giving options to texliveonfly, for example for a different compiler, it sometimes hangs for no apparent reason when installing packages. You can most probably work around it by first running texliveonfly without options, so texliveonfly main.tex (so it will download the packages), and then running whatever you wanted to, for example latexmk main.tex.

Background

Essentially texliveonfly is a build tool like latexmk (which is a Perl script): it wraps the TeX engine. Note however that you can chain them with texliveonfly --compiler=latexmk filename.tex. It is a Python script which works by trying to run your LaTeX file, and if it fails because a package is missing it will try to install that package. Besides ctan.org/pkg/texliveonfly, you can view the source at ctan.org/tex-archive/support/texliveonfly or on latex.org/forum

PS I tested this on Arch Linux 4.19.4 and on Travis CI (Ubuntu 14.04).
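The answer above describes the mechanism (run the compiler, detect the missing file, install the providing package, retry) rather than showing it. Below is a minimal, hedged Python sketch of that idea; it is not texliveonfly's actual source, and the tlmgr search --global --file invocation plus the parsing of its output are assumptions about how one might map a missing file to a package name.

import re
import subprocess
import sys

# Matches pdflatex's "missing file" message, e.g.
# ! LaTeX Error: File `beamer.sty' not found.
MISSING = re.compile(r"! LaTeX Error: File `([^']+)' not found")


def compile_with_on_the_fly_install(tex_file, compiler="pdflatex", max_rounds=5):
    """Toy illustration of the texliveonfly idea: compile, install what is missing, retry."""
    for _ in range(max_rounds):
        run = subprocess.run(
            [compiler, "-interaction=nonstopmode", tex_file],
            capture_output=True,
            text=True,
        )
        if run.returncode == 0:
            return True
        match = MISSING.search(run.stdout)
        if not match:
            return False  # failed for some reason other than a missing file
        missing_file = match.group(1)
        # Assumption: tlmgr can report which package ships a given file;
        # the exact output format may differ between TeX Live versions.
        info = subprocess.run(
            ["tlmgr", "search", "--global", "--file", "/" + missing_file],
            capture_output=True,
            text=True,
        )
        package = next(
            (line[:-1] for line in info.stdout.splitlines()
             if line.endswith(":") and not line.startswith("tlmgr")),
            None,
        )
        if package is None:
            return False
        subprocess.run(["tlmgr", "install", package], check=True)  # may need sudo
    return False


if __name__ == "__main__":
    compile_with_on_the_fly_install(sys.argv[1])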
8585
dbpedia
3
85
https://nx.dev/reference/project-configuration
en
Project Configuration
https://nx.dev/images/op…onfiguration.jpg
https://nx.dev/images/op…onfiguration.jpg
[]
[]
[]
[ "" ]
null
[]
null
Nx is a build system with built-in tooling and advanced CI capabilities. It helps you maintain and scale monorepos, both locally and on CI.
en
/favicon/favicon.svg
Nx
https://nx.dev/reference/project-configuration
A project's configuration is constructed by Nx from three sources: Tasks inferred by Nx plugins from tooling configuration Workspace targetDefaults defined in the nx.json file Individual project level configuration files (package.json and project.json) Each source will overwrite the previous source. That means targetDefaults will overwrite inferred tasks and project level configuration will overwrite both targetDefaults and inferred tasks. The combined project configuration can be viewed in the project details view by using Nx Console in your IDE or by running: The project details view also shows where each setting is defined so that you know where to change it. Project Level Configuration Files If you need to edit your project settings or modify an inferred task, you can do so in either package.json or project.json files. The examples on this page show both styles, and the only functional difference is that tasks that use executors must be defined in a project.json. Nx merges the two files to get each project's configuration. The full machine readable schema is available on GitHub. The following configuration creates build and test targets for Nx. You can invoke nx build mylib or nx test mylib without any extra configuration. Below are some more complete examples of project configuration files. For a more intuitive understanding of the roles of each option, you can highlight the options in the excerpt below that relate to different categories. Orchestration settings control the way Nx runs tasks. Execution settings control the actual task that is run. Caching settings control when Nx caches a task and what is actually cached. Task Definitions (Targets) A large portion of project configuration is related to defining the tasks for the project. In addition, to defining what the task actually does, a task definition also has properties that define the way that Nx should run that task. Those properties are described in detail below. Cache In Nx 17 and higher, caching is configured by specifying "cache": true in a target's configuration. This will tell Nx that it's ok to cache the results of a given target. For instance, if you have a target that runs tests, you can specify "cache": true in the target default configuration for test and Nx will cache the results of running tests. Per Project Caching + Distribution If you are using distributed task execution and disable caching for a given target, you will not be able to use distributed task execution for that target. This is because distributed task execution requires caching to be enabled. This means that the target you have disabled caching for, and any targets which depend on that target will fail the pipeline if you try to run them with Nx Agents enabled. Parallelism In Nx 19.5.0+, tasks can be configured to support parallelism or not. By default, tasks are run in parallel with other tasks on a given machine. However, in some cases, tasks can require a shared resource such as a port or memory. For these cases, setting "parallelism": false, will ensure that those tasks will not run in parallel with other tasks on a single machine. For example, if the e2e tasks all require port 4200, running them in parallel will conflict so the targets can specify to not support parallelism: Note: Parallelism is only per machine If you are using distributed task execution, tasks will still be run simultaneously on different machines. 
Because different agents do not share resources with one another, it is perfectly fine for multiple agents to run tasks which do not support parallelism at the same time. Therefore, using Nx Agents is key to running tasks which do not support parallelism quickly and efficiently. Inputs and Named Inputs Each cacheable task needs to define inputs which determine whether the task outputs can be retrieved from the cache or the task needs to be re-run. The namedInputs defined in nx.json or project level configuration are sets of reusable input definitions. A typical set of inputs may look like this: Outputs Targets may define outputs to tell Nx where the target is going to create file artifacts that Nx should cache. "outputs": ["{workspaceRoot}/dist/libs/mylib"] tells Nx where the build target is going to create file artifacts. This configuration is usually not needed. Nx comes with reasonable defaults (imported in nx.json) which implement the configuration above. Specifically, by default, the following locations are cached for builds: {workspaceRoot}/dist/{projectRoot}, {projectRoot}/build, {projectRoot}/dist, {projectRoot}/public Read the configure outputs for task caching recipe for helpful tips for setting outputs. Basic Example Usually, a target writes to a specific directory or a file. The following instructs Nx to cache dist/libs/mylib and build/libs/mylib/main.js: Specifying Globs Sometimes, multiple targets might write to the same directory. When possible it is recommended to direct these targets into separate directories. But if the above is not possible, globs (parsed by the GlobSet Rust library) can be specified as outputs to only cache a set of files rather than the whole directory. More advanced patterns can be used to exclude files and folders in a single line dependsOn Targets can depend on other targets. This is the relevant portion of the configuration file: A common scenario is having to build dependencies of a project first before building the project. This is what the "dependsOn": ["^build"] property of the build target configures. It tells Nx that before it can build mylib it needs to make sure that mylib's dependencies are built as well. This doesn't mean Nx is going to rerun those builds. If the right artifacts are already in the right place, Nx will do nothing. If they aren't in the right place, but they are available in the cache, Nx will retrieve them from the cache. Another common scenario is for a target to depend on another target of the same project. For instance, "dependsOn": ["build"] of the test target tells Nx that before it can test mylib it needs to make sure that mylib is built, which will result in mylib's dependencies being built as well. You can also express task dependencies with an object syntax: Starting from v19.5.0, wildcards can be used to define dependencies in the dependsOn field. Examples You can write the shorthand configuration above in the object syntax like this: With the expanded syntax, you also have a third option available to configure how to handle the params passed to the target. You can either forward them or you can ignore them (default). This also works when defining a relation for the target of the project itself using "projects": "self": Additionally, when using the expanded object syntax, you can specify individual projects in version 16 or greater. This configuration is usually not needed. Nx comes with reasonable defaults (imported in nx.json) which implement the configuration above. 
Executor/command options To define what a task does, you must configure which command or executor will run when the task is executed. In the case of inferred tasks you can provide project-specific overrides. As an example, if your repo has projects with a build inferred target running the vite build command, you can provide some extra options as follows: For more details on how to pass args to the underlying command see the Pass Args to Commands recipe. In the case of an explicit target using an executor, you can specify the executor and the options specific to that executor as follows: Target Metadata You can add additional metadata to be attached to a target. For example, you can provide a description stating what the target does: Project Metadata The following properties describe the project as a whole. You can annotate your projects with tags as follows: You can configure lint rules using these tags to, for instance, ensure that libraries belonging to myteam are not depended on by libraries belong to theirteam. implicitDependencies Nx uses powerful source-code analysis to figure out your workspace's project graph. Some dependencies cannot be deduced statically, so you can set them manually like this. The implicitDependencies property is parsed with the minimatch library, so you can review that syntax for more advanced use cases. You can also remove a dependency as follows: An implicit dependency could also be a glob pattern: Metadata You can add additional metadata to be attached to the project. For example, you can provide a description for your project: Including package.json files as projects in the graph Any package.json file that is referenced by the workspaces property in the root package.json file will be included as a project in the graph. If you are using Lerna, projects defined in lerna.json will be included. If you are using pnpm, projects defined in pnpm-workspace.yml will be included. If you want to ignore a particular package.json file, exclude it from those tools. For example, you can add !packages/myproject to the workspaces property. Ignoring package.json scripts
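The page's embedded configuration excerpts did not survive extraction above. As a hedged stand-in, the snippet below uses Python only to print one plausible project.json that combines the options discussed (cache, dependsOn, inputs, outputs, parallelism); the project name mylib, the input glob, and the exact values are illustrative assumptions, not Nx defaults.

import json

# Illustrative only: one way the options described above could fit together
# in a project.json for a hypothetical "mylib" project.
project_json = {
    "name": "mylib",
    "targets": {
        "build": {
            "cache": True,                     # let Nx cache this task
            "dependsOn": ["^build"],           # build dependencies first
            "inputs": ["{projectRoot}/**/*"],  # assumed input glob
            "outputs": ["{workspaceRoot}/dist/libs/mylib"],
        },
        "test": {
            "cache": True,
            "dependsOn": ["build"],            # same-project dependency
        },
        "e2e": {
            "parallelism": False,              # e.g. every e2e run needs port 4200
        },
    },
}

# json.dumps renders Python's True/False as JSON true/false.
print(json.dumps(project_json, indent=2))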
8585
dbpedia
2
30
https://opentelemetry.io/docs/kubernetes/operator/automatic/
en
Injecting Auto-instrumentation
https://opentelemetry.io…wordmark-001.png
https://opentelemetry.io…wordmark-001.png
[]
[]
[]
[ "" ]
null
[]
2024-08-05T19:22:09-04:00
An implementation of auto-instrumentation using the OpenTelemetry Operator.
en
/favicons/favicon.ico
OpenTelemetry
https://opentelemetry.io/docs/kubernetes/operator/automatic/
An implementation of auto-instrumentation using the OpenTelemetry Operator. The OpenTelemetry Operator supports injecting and configuring auto-instrumentation libraries for .NET, Java, Node.js, Python, and Go services. Installation First, install the OpenTelemetry Operator into your cluster. You can do this with the Operator release manifest, the Operator helm chart, or with Operator Hub. In most cases, you will need to install cert-manager. If you use the helm chart, there is an option to generate a self-signed cert instead. If you want to use Go auto-instrumentation, you need to enable the feature gate. See Controlling Instrumentation Capabilities for details. Create an OpenTelemetry Collector (Optional) It is a best practice to send telemetry from containers to an OpenTelemetry Collector instead of directly to a backend. The Collector helps simplify secret management, decouples data export problems (such as a need to do retries) from your apps, and lets you add additional data to your telemetry, such as with the k8sattributesprocessor component. If you chose not to use a Collector, you can skip to the next section. The Operator provides a Custom Resource Definition (CRD) for the OpenTelemetry Collector which is used to create an instance of the Collector that the Operator manages. The following example deploys the Collector as a deployment (the default), but there are other deployment modes that can be used. When using the Deployment mode the operator will also create a Service that can be used to interact with the Collector. The name of the service is the name of the OpenTelemetryCollector resource prepended to -collector. For our example that will be demo-collector. The above command results in a deployment of the Collector that you can use as an endpoint for auto-instrumentation in your pods. Configure Automatic Instrumentation To be able to manage automatic instrumentation, the Operator needs to be configured to know what pods to instrument and which automatic instrumentation to use for those pods. This is done via the Instrumentation CRD. Creating the Instrumentation resource correctly is paramount to getting auto-instrumentation working. Making sure all endpoints and env vars are correct is required for auto-instrumentation to work properly. .NET The following command will create a basic Instrumentation resource that is configured specifically for instrumenting .NET services. By default, the Instrumentation resource that auto-instruments .NET services uses otlp with the http/protobuf protocol. This means that the configured endpoint must be able to receive OTLP over http/protobuf. Therefore, the example uses http://demo-collector:4318, which will connect to the http port of the otlpreceiver of the Collector created in the previous step. Excluding auto-instrumentation By default, the .NET auto-instrumentation ships with many instrumentation libraries. This makes instrumentation easy, but could result in too much or unwanted data. If there are any libraries you do not want to use you can set the OTEL_DOTNET_AUTO_[SIGNAL]_[NAME]_INSTRUMENTATION_ENABLED=false where [SIGNAL] is the type of the signal and [NAME] is the case-sensitive name of the library. Learn more For more details, see .NET Auto Instrumentation docs. Java The following command creates a basic Instrumentation resource that is configured for instrumenting Java services. By default, the Instrumentation resource that auto-instruments Java services uses otlp with the http/protobuf protocol. 
This means that the configured endpoint must be able to receive OTLP over http via protobuf payloads. Therefore, the example uses http://demo-collector:4318, which connects to the http port of the otlpreceiver of the Collector created in the previous step. Excluding auto-instrumentation By default, the Java auto-instrumentation ships with many instrumentation libraries. This makes instrumentation easy, but could result in too much or unwanted data. If there are any libraries you do not want to use you can set the OTEL_INSTRUMENTATION_[NAME]_ENABLED=false where [NAME] is the name of the library. If you know exactly which libraries you want to use, you can disable the default libraries by setting OTEL_INSTRUMENTATION_COMMON_DEFAULT_ENABLED=false and then use OTEL_INSTRUMENTATION_[NAME]_ENABLED=true where [NAME] is the name of the library. For more details, see Suppressing specific instrumentation. Learn more For more details, see Java agent Configuration. Node.js The following command creates a basic Instrumentation resource that is configured for instrumenting Node.js services. By default, the Instrumentation resource that auto-instruments Node.js services uses otlp with the grpc protocol. This means that the configured endpoint must be able to receive OTLP over grpc. Therefore, the example uses http://demo-collector:4317, which connects to the grpc port of the otlpreceiver of the Collector created in the previous step. Excluding instrumentation libraries By default, the Node.js zero-code instrumentation has all the instrumentation libraries enabled. To enable only specific instrumentation libraries you can use the OTEL_NODE_ENABLED_INSTRUMENTATIONS environment variable as documented in the Node.js zero-code instrumentation documentation. To keep all default libraries and disable only specific instrumentation libraries you can use the OTEL_NODE_DISABLED_INSTRUMENTATIONS environment variable. For details, see Excluding instrumentation libraries. Note If both environment variables are set, OTEL_NODE_ENABLED_INSTRUMENTATIONS is applied first, and then OTEL_NODE_DISABLED_INSTRUMENTATIONS is applied to that list. Therefore, if the same instrumentation is included in both lists, that instrumentation will be disabled. Learn more For more details, see Node.js auto-instrumentation. Python The following command will create a basic Instrumentation resource that is configured specifically for instrumenting Python services. By default, the Instrumentation resource that auto-instruments Python services uses otlp with the http/protobuf protocol (gRPC is not supported at this time). This means that the configured endpoint must be able to receive OTLP over http/protobuf. Therefore, the example uses http://demo-collector:4318, which will connect to the http port of the otlpreceiver of the Collector created in the previous step. As of operator v0.67.0, the Instrumentation resource automatically sets OTEL_EXPORTER_OTLP_TRACES_PROTOCOL and OTEL_EXPORTER_OTLP_METRICS_PROTOCOL to http/protobuf for Python services. If you use an older version of the Operator you MUST set these env variables to http/protobuf, or Python auto-instrumentation will not work. Auto-instrumenting Python logs By default, Python logs auto-instrumentation is disabled. If you would like to enable this feature, you must to set the OTEL_LOGS_EXPORTER and OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED environment variables as follows: Note that OTEL_LOGS_EXPORTER must be explicitly set to otlp_proto_http, otherwise it defaults to gRPC. 
Excluding auto-instrumentation By default, the Python auto-instrumentation ships with many instrumentation libraries. This makes instrumentation easy, but can result in too much or unwanted data. If there are any packages you do not want to instrument, you can set the OTEL_PYTHON_DISABLED_INSTRUMENTATIONS environment variable. Learn more See the Python agent Configuration docs for more details. Go The following command creates a basic Instrumentation resource that is configured specifically for instrumenting Go services. By default, the Instrumentation resource that auto-instruments Go services uses otlp with the http/protobuf protocol. This means that the configured endpoint must be able to receive OTLP over http/protobuf. Therefore, the example uses http://demo-collector:4318, which connects to the http/protobuf port of the otlpreceiver of the Collector created in the previous step. The Go auto-instrumentation does not support disabling any instrumentation. See the Go Auto-Instrumentation repository for more details. Now that your Instrumentation object is created, your cluster has the ability to auto-instrument services and send data to an endpoint. However, auto-instrumentation with the OpenTelemetry Operator follows an opt-in model. In order to activate automatic instrumentation, you’ll need to add an annotation to your deployment. Add annotations to existing deployments The final step is to opt in your services to automatic instrumentation. This is done by updating your service’s spec.template.metadata.annotations to include a language-specific annotation: .NET: instrumentation.opentelemetry.io/inject-dotnet: "true" Go: instrumentation.opentelemetry.io/inject-go: "true" Java: instrumentation.opentelemetry.io/inject-java: "true" Node.js: instrumentation.opentelemetry.io/inject-nodejs: "true" Python: instrumentation.opentelemetry.io/inject-python: "true" The possible values for the annotation can be "true" - to inject Instrumentation resource with default name from the current namespace. "my-instrumentation" - to inject Instrumentation CR instance with name "my-instrumentation" in the current namespace. "my-other-namespace/my-instrumentation" - to inject Instrumentation CR instance with name "my-instrumentation" from another namespace "my-other-namespace". "false" - do not inject Alternatively, the annotation can be added to a namespace, which will result in all services in that namespace to opt-in to automatic instrumentation. See the Operators auto-instrumentation documentation for more details. Opt-in a Go Service Unlike other languages’ auto-instrumentation, Go works via an eBPF agent running via a sidecar. When opted in, the Operator will inject this sidecar into your pod. In addition to the instrumentation.opentelemetry.io/inject-go annotation mentioned above, you must also supply a value for the OTEL_GO_AUTO_TARGET_EXE environment variable. You can set this environment variable via the instrumentation.opentelemetry.io/otel-go-auto-target-exe annotation. This environment variable can also be set via the Instrumentation resource, with the annotation taking precedence. Since Go auto-instrumentation requires OTEL_GO_AUTO_TARGET_EXE to be set, you must supply a valid executable path via the annotation or the Instrumentation resource. Failure to set this value causes instrumentation injection to abort, leaving the original pod unchanged. Since Go auto-instrumentation uses eBPF, it also requires elevated permissions. 
When you opt in, the sidecar the Operator injects will require the following permissions: Troubleshooting If you run into problems trying to auto-instrument your code, here are a few things that you can try. Did the Instrumentation resource install? After installing the Instrumentation resource, verify that it installed correctly by running this command, where <namespace> is the namespace in which the Instrumentation resource is deployed: Sample output: Do the OTel Operator logs show any auto-instrumentation errors? Check the OTel Operator logs for any errors pertaining to auto-instrumentation by running this command: Were the resources deployed in the right order? Order matters! The Instrumentation resource needs to be deployed before deploying the application, otherwise the auto-instrumentation won’t work. Recall the auto-instrumentation annotation: The annotation above tells the OTel Operator to look for an Instrumentation object in the pod’s namespace. It also tells the Operator to inject Python auto-instrumentation into the pod. When the pod starts up, the annotation tells the Operator to look for an Instrumentation object in the pod’s namespace, and to inject auto-instrumentation into the pod. It adds an init-container to the application’s pod, called opentelemetry-auto-instrumentation, which is then used to injects the auto-instrumentation into the app container. If the Instrumentation resource isn’t present by the time the application is deployed, however, the init-container can’t be created. Therefore, if the application is deployed before deploying the Instrumentation resource, the auto-instrumentation will fail. To make sure that the opentelemetry-auto-instrumentation init-container has started up correctly (or has even started up at all), run the following command: Which should output something like this: If the output is missing Created and/or Started entries for opentelemetry-auto-instrumentation, then it means that there is an issue with your auto-instrumentation. This can be the result of any of the following: The Instrumentation resource wasn’t installed (or wasn’t installed properly). The Instrumentation resource was installed after the application was deployed. There’s an error in the auto-instrumentation annotation, or the annotation in the wrong spot — see #4 below. Be sure to check the output of kubectl get events for any errors, as these might help point to the issue. Is the auto-instrumentation annotation correct? Sometimes auto-instrumentation can fail due to errors in the auto-instrumentation annotation. Here are a few things to check for: Is the auto-instrumentation for the right language? For example, when instrumenting a Python application, make sure that the annotation doesn’t incorrectly say instrumentation.opentelemetry.io/inject-java: "true" instead. Is the auto-instrumentation annotation in the correct location? When defining a Deployment, annotations can be added in one of two locations: spec.metadata.annotations, and spec.template.metadata.annotations. The auto-instrumentation annotation needs to be added to spec.template.metadata.annotations, otherwise it won’t work. Was the auto-instrumentation endpoint configured correctly? The spec.exporter.endpoint attribute of the Instrumentation resource defines where to send data to. This can be an OTel Collector, or any OTLP endpoint. If this attribute is left out, it defaults to http://localhost:4317, which, most likely won’t send telemetry data anywhere. 
When sending telemetry to an OTel Collector located in the same Kubernetes cluster, spec.exporter.endpoint should reference the name of the OTel Collector Service. For example: Here, the Collector endpoint is set to http://demo-collector.opentelemetry.svc.cluster.local:4317, where demo-collector is the name of the OTel Collector Kubernetes Service. In the above example, the Collector is running in a different namespace from the application, which means that opentelemetry.svc.cluster.local must be appended to the Collector’s service name, where opentelemetry is the namespace in which the Collector resides.
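Since the page's YAML examples were stripped during extraction, here is a small, hedged Python sketch of the patch shape described above: it builds the JSON that adds the instrumentation.opentelemetry.io/inject-python annotation under spec.template.metadata.annotations of a Deployment. The deployment name, namespace, and the make_patch.py filename in the comment are placeholders, and how you apply the patch (kubectl patch or a Kubernetes client library) is up to you.

import json

# The annotation must sit under spec.template.metadata.annotations,
# as stressed in the troubleshooting notes above.
patch = {
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    "instrumentation.opentelemetry.io/inject-python": "true",
                }
            }
        }
    }
}

print(json.dumps(patch))
# The deployment name and namespace below are placeholders; adjust to your cluster, e.g.:
#   kubectl patch deployment my-python-app -n demo --patch "$(python make_patch.py)"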
8585
dbpedia
0
29
https://timothybramlett.com/How_to_create_a_Python_Package_with___init__py.html
en
How to create a Python Package with __init__.py
https://timothybramlett-…ticle-banner.jpg
[ "https://timothybramlett-com-public.s3.us-east-1.amazonaws.com/init-article-banner.jpg" ]
[]
[]
[ "Python" ]
null
[ "Timothy Bramlett" ]
2016-09-23T00:00:00
How to create a Python Package with init.py
en
Timothy Bramlett
https://timothybramlett.com/How_to_create_a_Python_Package_with___init__py.html
What is a Python package?

A Python package is simply an organized collection of Python modules. A Python module is simply a single Python file.

Why would I want to create a package using __init__.py?

Creating a package with __init__.py is all about making it easier to develop larger Python projects. It provides a mechanism for you to group separate Python scripts into a single importable module.

Let's run through some examples

The best way to understand why you would use __init__.py and to learn how to use it to create a package is to run through some quick examples! The best way to learn is by doing! The code in this tutorial should work for Python 2 or 3. Just remember, if you are using 2 then you will need to use the from __future__ import print_function functionality. Say we have three modules we have created. Remember, a module is just another name for any single Python file. For our example, the content of these files is the following: Obviously, these functions are useless, but they serve as a model for the basic concept that we have some Python modules that we have already written that are somehow related. So, without creating a package and using __init__.py, how do we use the functions in these files? Well, we can only import these files if they are in the current directory that whatever script we are running is running from. In other words, we can use these files in a new Python script but with one key caveat: the files must be in the same directory as the script we are trying to use them in. To illustrate that, let's create a file called example1.py that leverages our modules:

Adding a blank __init__.py

What if we wanted to separate these scripts into a folder in order to keep them more organized? Well, that is where the __init__.py file comes into play. First, let's move our scripts into a new subfolder and call it string_func. Then create an empty file in that folder called __init__.py. Here is our new file/folder structure: So, now let's test out exactly what __init__.py allows us to do: let's make a new example2.py file. So, now we can access our string functions in this manner. This is great, because they are all in a separate folder, but the syntax is definitely not very succinct. Let's see if we can clean things up a bit by editing our __init__.py file.

Adding imports to __init__.py

Open your __init__.py file and make the following changes: Note that the . before the module name is necessary as of Python 3 since it is more strict regarding relative imports: https://stackoverflow.com/questions/12172791/changes-in-import-statement-python3?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa And so with that in our __init__.py we can now shorten our code to: Now the syntax is a lot shorter and you can see that string_func is behaving like its own module. So, that is basically what __init__.py does! It allows you to treat a directory as if it was a Python module. Then you can further define imports inside your __init__.py file to make imports more succinct, or you can just leave the file blank.

Debugging import Issues

There are basically 3 tips I have for debugging import issues:

Use the interactive interpreter (the REPL) to import the modules and see if you are getting what you expect.
Start your script with python -v -m my_scriptname.py and then check the output to see exactly where your modules are getting imported from.
Use PyCharm. PyCharm's fantastic introspection abilities mean that you will immediately know whether or not your module is being properly imported, as it will indicate an error if not. It will sometimes also suggest the proper correction. The community edition is free, and if you're a student you can get a free subscription to ALL of their products!

For more information about Python modules and packages you can check the Python documentation on the subject. You can also check out this great Talk Python To Me podcast with David Beazley where he discusses the subject, as well as David's talk on the same subject.
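The tutorial's code listings did not survive the scrape, so below is a hedged reconstruction of the string_func example it describes, showing two of the three modules for brevity; the module and function names (stringToUpper.py, to_upper(), and so on) are hypothetical stand-ins for the author's originals.

# string_func/stringToUpper.py  (hypothetical module name)
def to_upper(text):
    """Return an upper-cased copy of text."""
    return text.upper()

# string_func/stringToLower.py  (hypothetical module name)
def to_lower(text):
    """Return a lower-cased copy of text."""
    return text.lower()

# string_func/__init__.py
# The leading dot makes the import relative, which Python 3 requires
# for imports inside a package.
from .stringToUpper import to_upper
from .stringToLower import to_lower

# example2.py - thanks to the imports in __init__.py the calls stay short.
import string_func

print(string_func.to_upper("hello"))   # -> HELLO
print(string_func.to_lower("WORLD"))   # -> world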
8585
dbpedia
1
65
https://www.autopack.org/install/python-molecular-viewer-pmv-installation
en
Python Molecular Viewer (PMV) Installation
https://lh4.googleusercontent.com/cj2sl2hUoSyExbG9_YEzq4gE79HirhwCqwmSA2lMyZqJ_WZjkSlKKHZONOsFfCqdd4F3CMvOZ96K_kSzgVITQBE=w16383
https://lh4.googleusercontent.com/cj2sl2hUoSyExbG9_YEzq4gE79HirhwCqwmSA2lMyZqJ_WZjkSlKKHZONOsFfCqdd4F3CMvOZ96K_kSzgVITQBE=w16383
[ "https://lh4.googleusercontent.com/cj2sl2hUoSyExbG9_YEzq4gE79HirhwCqwmSA2lMyZqJ_WZjkSlKKHZONOsFfCqdd4F3CMvOZ96K_kSzgVITQBE=w16383", "https://lh4.googleusercontent.com/cj2sl2hUoSyExbG9_YEzq4gE79HirhwCqwmSA2lMyZqJ_WZjkSlKKHZONOsFfCqdd4F3CMvOZ96K_kSzgVITQBE=w16383", "https://lh3.googleusercontent.com/yFd_bMW4CjC2ueXIdADLiwfUpme4CNOCvEldd8iGiGjpMiPfC9Mnr_0zFE_YhcVztk3egK9o7Y-aMLKz6-rz3HmmzGbKXt29POzTawMi1ZzlK0ih=w1280", "https://lh5.googleusercontent.com/itvtqIj-YmLgwsz9UWy-OQcDfvHMmM2ZqNa46Uly0jVk0Hm1SlV94QqG4Np-fDmDQdfV2ulq00ywhUcoydCMyyEwbpDKaLv98roPO7xQmShuKP4q=w1280", "https://lh3.googleusercontent.com/asVwmB68kiNLpGG6oo_p6tSO4lGiKvRLfeQRohhB6IENMWwKqvfiQNiHS8BB7aDoQ5-ANa_DEfFePB9GNdEc8yOMStdCEi6qhsgZI5tRSmqsb843=w1280", "https://lh5.googleusercontent.com/n_qdsPGnasumvCYpnuJsjRqteirjxXAr-1xIr9tfaj36yKTZrnQNmJpvQ6f9VBroKgKTDyaar2vq_GSebucFb3dpcQrfblrwx43IScGbg3yxB18g=w1280", "https://lh6.googleusercontent.com/_Qdo2EelCDkmNlrRX1gAI-nwJprySkcAN4Z1ZJOgplQjrx7VV2YXByZQ9s184X29MxHb_6d3AD0GmnPcS5LDhJQ_EoLnBr77Vufq-1DtQgcBgF1k=w1280", "https://lh4.googleusercontent.com/zPMnrQ79JFeL-StQoJ41vOYYhu2xtaA5S8ZOn4OufWOMNvJoaHTNBqeWehNG0OZ-8MGBZ5amg-nLxJShq7pEi-cOw57B2mYqam5gxDBuIRFhd9mZ=w1280", "https://lh6.googleusercontent.com/2jaKk1WkbZj8UF-bdHvpRrGhW4XxNPo0SxY3HoQIYJ5CCu0aRLFKjlIFetucv7OAFB4C-NfWd_j64V4xiQMEUqpECAwroH6KCrx80MxF2BVZAM9k=w1280", "https://lh6.googleusercontent.com/zf7kMHW__rCnqE8OCotbjWUBQArm2FTzISSctIvYsmdgeK7WKtV9DwfWLhNfDMGotpMrnQ=w1280", "https://lh5.googleusercontent.com/PjgRhn_KBp-KdnAyEmJqN2Ad91jJo6GebPIAg4Tza5ez0RrkxafRLhVWGIInsk7QuqZjNskoxE2gjaN8am_Zw8MjiZpzFRXQhuEQl5DS3B5gFPuT=w1280", "https://lh6.googleusercontent.com/aef7aY1ej157r3AQtBnUV2gx2IA0IBjpBOfy9EDd7uhCbE4dYV8O1L-7n2cFKnmWAFSXNrQVAeU1TP4LTnM6WDxOvw2sVAPOHKYHDaX_LRdDgrXY=w1280", "https://lh5.googleusercontent.com/PyRoIQqxqkJV146UaCEFqvkFmazBQ5ZGPI8Pbo34qN3H0id4rhxV4dEvJly67_2Tka1VAA=w1280", "https://lh5.googleusercontent.com/vyzdO_C0efJb1C0wEvE1DTxnri41i7JUbRUVVGsSrlFWRQrDaoCer1pk8xDQSwUH3glEZ2lYFuU3CHyL182f4PoWOg83PefaJdfTN6U7DckLP1H8=w1280", "https://lh6.googleusercontent.com/ZZ4dvO2uXpYS5JFzisxVN6K3Bor0eq2i-JhrmOugq47JwFXxPdp50sYjI01pXaNZNvgGWnfC-TXqHtAd1UTEZjsH7_gL6KM8qQXJIJqFeuc8T2L_=w1280", "https://lh5.googleusercontent.com/7u-FD8HYkdOe-7zD7c92fHNekLSLxt7-bU78kYVpFRnLiGHVqG6TjAvjko9gNBfrejt20TskmX32eADOZRpcfsk1OHp4SYc7EF0sTR54cQ4AW569=w1280" ]
[]
[]
[ "" ]
null
[]
null
autoPACK: INSTALLATION for the Python Molecular Viewer (PMV) autoPACK for PMV is a very early alpha version released only for MAC as of March 14, 2013. Please check back frequently for updates. Overview This installation uses a preview nightly build of PMV and requires the installation of two
en
https://lh4.googleusercontent.com/zBAxrAPszETIHVxpbt0BDLSd2Twd8B7m8xHR0rfArFSUE8YI12ezdnFluJfONNQtt4yrYMnGD_kuH70KOMRPc_cru3PX
https://www.autopack.org/install/python-molecular-viewer-pmv-installation
autoPACK: INSTALLATION for the Python Molecular Viewer (PMV) autoPACK for PMV is a very early alpha version released only for MAC as of March 14, 2013. Please check back frequently for updates. Overview This installation uses a preview nightly build of PMV and requires the installation of two additional modules to the package. PMV is a powerful molecular viewer that has a number of customizable features and comes with many pluggable commands ranging from displaying molecular surfaces to advanced volume rendering. Pre-requisites Windows, Mac, or Linux Respect the license that you will agree to during installation.
8585
dbpedia
0
91
https://dev.gajim.org/wannestas/gajim/-/blob/plugin-system/autopackage/default.apspec%3Fref_type%3Dheads
en
autopackage/default.apspec · plugin
https://dev.gajim.org/up….gajim.Gajim.png
https://dev.gajim.org/up….gajim.Gajim.png
[ "https://dev.gajim.org/assets/no_avatar-849f9c04a3a0d0cea2424ae97b27447dc64a7dbfae83c036c45b403392f0e8ba.png" ]
[]
[]
[ "" ]
null
[]
2008-08-12T16:12:25+00:00
Gajim XMPP client
en
/assets/favicon-72a2cad5025aa931d6ea56c3201d1f18e68a8cd39788c7c80d5b2b82aa5143ef.png
GitLab
https://dev.gajim.org/wannestas/gajim/-/blob/plugin-system/autopackage/default.apspec?ref_type=heads
8585
dbpedia
2
88
https://www.systemcenterdudes.com/best-sccm-community-tools-2023-edition/
en
Our list of Intune Community Tools – 2023 Edition
https://www.systemcenter…/10/image-21.png
https://www.systemcenter…/10/image-21.png
[ "https://www.systemcenterdudes.com/wp-content/uploads/2023/07/cropped-logo.png", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/zoom.svg", "https://www.systemcenterdudes.com/wp-content/uploads/2023/10/image-21.png", "https://secure.gravatar.com/avatar/82b054140ecfd03aed6c3b48c8d5be72?s=96&d=retro&r=g", "https://www.systemcenterdudes.com/wp-content/uploads/2024/06/PatchmyPC-1.jpg", "https://www.systemcenterdudes.com/wp-content/uploads/2024/02/image-47.png", "https://www.systemcenterdudes.com/wp-content/uploads/2024/02/Patch20My20PC20-20Logo20Horizontal20-20Blue20300ppi-1.png", "https://www.systemcenterdudes.com/wp-content/uploads/2016/10/MVP_Logo_Horizontal_Preferred_Cyan300_RGB_300ppi-e1500695113889.png", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/in.svg", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/fb.svg", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/tw.svg", "https://www.systemcenterdudes.com/wp-content/uploads/2023/10/image-21.png", "https://www.systemcenterdudes.com/wp-content/uploads/2024/06/SCD-Banner-Consulting-Services-3.gif", "https://www.systemcenterdudes.com/wp-content/uploads/2023/10/image-21.png", "https://www.systemcenterdudes.com/wp-content/uploads/2023/10/image-27.png", "https://www.systemcenterdudes.com/wp-content/uploads/2023/10/image-24.png", "https://www.systemcenterdudes.com/wp-content/uploads/2023/10/image-22.png", "https://www.systemcenterdudes.com/wp-content/uploads/2023/11/image.png", "https://systemcenterdudes.com/wp-content/uploads/2021/11/image-7.png", "https://www.systemcenterdudes.com/wp-content/uploads/2023/10/image-26.png", "https://www.systemcenterdudes.com/wp-content/uploads/2023/10/image-23.png", "https://www.systemcenterdudes.com/wp-content/uploads/2023/10/image-25.png", "https://www.systemcenterdudes.com/wp-content/uploads/2023/10/image-28.png", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/in.svg", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/fb.svg", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/tw.svg", "https://www.systemcenterdudes.com/wp-content/uploads/2024/02/image-47.png", "https://www.systemcenterdudes.com/wp-content/uploads/2024/02/Patch20My20PC20-20Logo20Horizontal20-20Blue20300ppi-1.png", "https://www.systemcenterdudes.com/wp-content/uploads/2016/10/MVP_Logo_Horizontal_Preferred_Cyan300_RGB_300ppi-e1500695113889.png", "https://secure.gravatar.com/avatar/35f639a4c0ead9bfc7cb36abadf0d00c?s=96&d=retro&r=g", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/cross.svg", "https://www.systemcenterdudes.com/wp-content/uploads/2024/08/2110098.jpg", "https://www.systemcenterdudes.com/wp-content/uploads/2022/07/importautopilot.gif", "https://www.systemcenterdudes.com/wp-content/uploads/2024/07/image-9.png", "https://www.systemcenterdudes.com/wp-content/uploads/2023/07/cropped-logo.png", "https://www.systemcenterdudes.com/wp-content/uploads/2023/07/in.svg", "https://www.systemcenterdudes.com/wp-content/uploads/2023/07/fb.svg", "https://www.systemcenterdudes.com/wp-content/uploads/2023/09/tw.svg", "https://www.systemcenterdudes.com/wp-content/uploads/2023/07/yt.svg", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/cross.svg", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/cross.svg", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/cross.svg", 
"https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/sent.svg", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/cross.svg", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/sent.svg", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/cross.svg", "https://www.systemcenterdudes.com/wp-content/themes/scd/assets/images/fail.svg", "https://www.facebook.com/tr?id=1022814134754498&ev=PageView&noscript=1" ]
[]
[]
[ "" ]
null
[ "Benoit Lecours", "www.facebook.com" ]
2023-10-25T19:23:55+00:00
I decided to do a first edition of the Intune Community Tools since Intune has evolved a lot in the last 4 years.
en
https://www.systemcenter…logo-166x150.png
System Center Dudes
https://www.systemcenterdudes.com/best-sccm-community-tools-2023-edition/
Based on the popularity of my Best SCCM Community Tools post published in 2018, 2019 and 2020, and 2021. I decided to do a first edition of the Intune Community Tools since Intune has evolved a lot in the last 4 years. To the list, I added YouTube channels that you could have missed. The Intune Community tools are listed in no particular order. This list could have been longer but I needed to choose from my top personal list. If you feel that I’ve forgotten your awesome contribution to the Intune community, please use the comment section and it will be a pleasure to promote it. List of Intune Community Tools Intune Training If you’re looking for a place to start and learn about Intune, this is the YouTube channel to visit. There are tons of useful videos by one of our team members Adam Gross and it’s friend Steve Hosking. Last month, they started a 2023 reboot to update all of their videos. They are releasing a new video regularly so be sure to check the channel on a regular basis. Youtube : Intune Training Endpoint Analytics Remediation Scripts Proactive remediations are PowerShell scripts that can detect and fix issues/settings/configurations on Intune (Endpoint) managed devices. This repository is a community project with many ready-to-use endpoint analytics remediation scripts. If you’re into remediation scripts, this is the repository to look for. It can save you hours of PowerShell coding. Chances are that another community member has already achieved what you’re looking to do. Product page: Endpoint Analytics Remediation Scripts Intune Offboarding Tool This PowerShell script provides a WPF GUI-based tool that facilitates the offboarding of devices from Microsoft’s Intune, AutoPilot, and Azure AD services. The tool leverages Microsoft Graph APIs to authenticate, search, and remove devices. Product Page : Intune Offboarding Tool Author : Ugur Koc OSDCloud OSDCloud is a solution for deploying Windows 10/11 x64 over the internet using the OSD PowerShell Module. There is also a Sandbox to test OSDCloud without using a dedicated OSDCloud WinPE. This solution is a must for organizations managing many employees who are working from home. Product Page: OSDCloud Author : David Segura IntuneManagement These PowerShell scripts use Microsoft Authentication Library (MSAL), Microsoft Graph APIs and Azure Management APIs to manage objects in Intune and Azure. The script has a simple WPF UI and it supports operations like Export, Import, Copy, Download, Compare, etc. This makes it easy to back up or clone a complete Intune environment. The scripts can export and import objects including assignments and support import/export between tenants. This script also has an extension that can document profiles and policies in Intune which can be very useful. Github : IntuneManagement Author : Micke Karlsson IntuneWin32App This module was created to provide means to automate the packaging, creation, and publishing of Win32 applications in Microsoft Intune. It provides a set of functions to manage all aspects of Win32 apps in Microsoft Endpoint Manager (Intune). This can be very useful to automate or ease the creation of many apps. Github : IntuneWin32App Author : Nickolaj Andersen from MSEndpointMgr Intune Uploader Simplify the process of creating and updating apps and other payloads in Intune by leveraging the power of AutoPkg. With AutoPkg, you can automate the process of downloading, packaging and uploading apps to Intune, saving you time and effort. 
Product Page : intune-uploader
Author : Tobias Almen

Intune Device Details GUI

This Windows PowerShell-based GUI/report helps Intune admins see Intune device data in one visual view. In particular, it shows which Azure AD groups and Intune filters are used in application and configuration assignments. Assignment group information helps admins understand why apps and configurations are targeted to devices and find possible bad assignments.

Product Page : Intune Device Details GUI
Author : Petri Paavola

Intune Community Tools – IntuneCD

IntuneCD (Intune Continuous Delivery, as the name stands for) is a Python package that is used to back up and update configurations in Intune. The main function is to back up configurations from Intune to a Git repository from a DEV environment and, if any configurations have changed, push them to the PROD Intune environment.

Product page : IntuneCD tool
Author : Tobias Almen

Win32AppMigrationTool

Win32AppMigrationTool is designed to export the Application and Deployment Data from ConfigMgr to firstly create a .intunewin file and secondly publish the Win32App to Intune.

Author: Ben Whitmore from PatchMyPC
Product page: Win32AppMigrationTool

SyncMLViewer

This tool is able to present the SyncML protocol stream between the Windows 10 client and the management system. In addition, it does some extra parsing to extract details and make the analysis a bit easier. The tool can be very handy for troubleshooting policy issues. Tracing what the client actually sends and receives provides deep protocol insights. It makes it easy to verify OMA-URIs and data field definitions and to get confirmation about queried or applied settings.

Product Page : SyncMLViewer
Author : Oliver Kieselbach

Intune Community Tools – DCToolbox

A PowerShell toolbox for Microsoft 365 security fans. This can be very useful to automate or ease security parameters in Intune.

Github : DCToolbox
Author : Daniel Chronlund

Get-IntuneManagementExtensionDiagnostics

This script analyzes Microsoft Intune Management Extension (IME) log(s) and creates a timeline report from found events. For Win32App delivery it also shows a summary of download statistics with estimated network bandwidth and Delivery Optimization statistics.

Product page: Get-IntuneManagementExtensionDiagnostics
Author : Petri Paavola

Intune Debug Toolkit

Intune Debug Toolkit is a community-developed solution to troubleshoot co-managed or Intune-managed-only devices.

Product page: Intune Debug Toolkit – MSEndpointMgr
Many authors: Oliver Kieselbach, Rudy Ooms, David Just, Jannik Reinhard, Ondrej Šebela, David Segura, Petri Paavola

Intune network drive mapping generator

This tool generates Intune PowerShell scripts to map network drives on Azure AD joined devices. It can be useful for organizations that need to support mapped network drives.

Product page : Intune network drive mapping generator
Author : Nicola Suter

Intune Community Tools – KQL Cheat Sheet

If you're into learning KQL (Kusto), this cheat sheet is a must for understanding the basics of the language.

Product page : KQL Cheat Sheet
Author : Markus Bakker

We hope this list of Intune Community Tools was helpful. Thanks to all the contributors who helped the Intune community with their tools, blog posts, and time.
8585
dbpedia
1
49
https://www.kali.org/docs/development/advanced-packaging-example/
en
Advanced Packaging Step-By-Step Example (FinalRecon & Python-icmplib)
https://www.kali.org/ima…es/kali-logo.svg
https://www.kali.org/ima…es/kali-logo.svg
[]
[]
[]
[ "kali", "linux", "kalilinux", "Penetration", "Testing", "Penetration Testing", "Distribution", "Advanced" ]
null
[]
2023-06-06T00:00:00+00:00
This guide is accurate at the time of writing. As it references a lot of external resources out of our control, items may be different over time (as software gets updated). FinalRecon is a Python 3 application with multiple Python dependencies. At the time of writing, one of the dependencies (python3-icmplib) is not in the Kali Linux repository.
en
https://www.kali.org/images/favicon.png
Kali Linux
https://www.kali.org/docs/development/advanced-packaging-example/
This guide is accurate at the time of writing. As it references a lot of external resources out of our control, items may be different over time (as software gets updated). FinalRecon is a Python 3 application with multiple Python dependencies. At the time of writing, one of the dependencies (python3-icmplib) is not in the Kali Linux repository. In this guide we will have to learn how to follow dependency chains, and fix anything required to ensure that the end package can be included. We will also create a patch, a helper-script, as well as a runtime test for the package. We will assume we have already followed our documentation on setting up a packaging environment as well as our previous packaging guides #1 (Instaloader) & #2 (Photon), as this guide builds on what they explain.

FinalRecon Code Overview

The first action we will take will be to look at FinalRecon's source code to see what information we can acquire. Using this, we notice the following:

It has no tag release
The MIT license file
There is no setup.py file (which is used for setuptools)
There is a requirements.txt file (which is used for pip)
Various descriptions about the tool & usage guides
Various external links (if any additional research is required)

Missing Tag Releases

As FinalRecon does not have a tag release we will have to create our own upstream tar file. Looking to see what branches there are, we discover there is just one (there isn't a stable/production one, nor is there a beta/deployment/staging one). As a result, we will use whatever is the latest commit on the main branch until the author does a tag release. We can also open up an issue and/or email the author to see if they will respond to such a request. Note that having a "tagged" release is preferred when doing Debian packaging. End users often want something that is "stable", and which has been fully tested. It's also easier for the distribution to know when to update the package: we just wait for upstream to release something, which is a clear signal that the code is ready to be used. So when it's possible, we favor packaging a tagged release over the latest Git commit.

License

This package has been detected as having an MIT license by GitHub. If we look at the specific license file we can see that there is not a lot to copy, so we will be copying this exactly as-is. Unfortunately, though we have found a maintainer, we have not found any contact information yet. We will have to continue searching for contact information.

Dependencies

As there is a requirements.txt file (which is used for Python's pip to install any Python dependencies that are required for this tool to work), we will need to check to see what's needed.

Description(s)

We will once again pull our description from FinalRecon's GitHub. For the short description we will use a modified version of the first line in the README, "A fast and simple python script for web reconnaissance." For the long description we will also use a modified version of the first line in the README, however we will expand it this time: "A fast and simple python script for web reconnaissance that follows a modular structure and provides detailed information on various areas."

Maintainer(s)

If we were to look all over the GitHub we would not find an email address. We could look in git log and view the associated email addresses, however these do not seem to be solid as there are multiple for "thewhiteh4t" (at least 3). Instead, we do more digging.
We notice that there is a YouTube video demo linked in the README.md, if we go to the YouTube channel’s about page we can view an email address for business inquiries (which does not match to any in the git log). This will be a good choice to use as the contact information. With that said all said, not having a contact information is not an essential part , so if we were unable to find one we could still continue to package. Setting Up The Environment We will assume that we have already followed our documentation on setting up a packing environment. Let’s set up our directories now for this package: kali@kali:~$ mkdir -p ~/kali/packages/finalrecon/ ~/kali/upstream/ kali@kali:~$ Downloading Git Snapshot We’re going to download an archive of the upstream source code. Since upstream didn’t tag any release yet, we’ll package the latest Git commit on the main branch. There are different (many?) ways to do that, and in this example we will use uscan for the task. uscan is able to download a Git repository, pack it into a .tar.gz archive, and come up with a meaningful (and somewhat standard) version string. This last point is important: a Debian package must have a version, however a Git commit doesn’t have a version per se. So we need to associate a version with a Git commit, and there are many ways to get that wrong. So rather than deciding by ourselves what the package version should be, we’ll let the tooling (uscan in this case) do that for us. In order to use uscan, we need a watch file. This file is usually part of the packaging files, and located in debian/watch. Let’s start by entering the working directory, and then create the debian dir: kali@kali:~$ cd ~/kali/packages/finalrecon/ kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ mkdir debian kali@kali:~/kali/packages/finalrecon$ And now let’s create the watch file. The purpose of the watch file is to provide instructions to find the latest upstream release online. In this particular case though, upstream didn’t provide any tagged release yet, so we’ll configure the watch file to track the latest Git commit on the main branch: kali@kali:~/kali/packages/finalrecon$ vim debian/watch kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/watch version=4 opts="mode=git, pgpmode=none" \ https://github.com/thewhiteh4t/FinalRecon HEAD kali@kali:~/kali/packages/finalrecon$ At this point, we have enough to run uscan to download and pack the latest Git commit from upstream: kali@kali:~/kali/packages/finalrecon$ uscan --destdir ~/kali/upstream/ --force-download \ --package finalrecon --upstream-version 0~0 --watchfile debian/watch uscan: Newest version of finalrecon on remote site is 0.0~git20201107.0d41eb6, local version is 0~0 uscan: => Newer package available from: => https://github.com/thewhiteh4t/FinalRecon HEAD uscan warn: Missing debian/source/format, switch compression to gzip Successfully repacked ~/kali/upstream/finalrecon-0.0~git20201107.0d41eb6.tar.xz as ~/kali/upstream/finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz. This command warrants some explanations. Since at this point we run uscan from an almost empty directory, we need to be explicit about what we want to do. In particular: --watchfile tells uscan where is the watch file that we want it to use. --package is used to give the package name. --upstream-version is actually the “current upstream version”. 
In general, uscan works by comparing the latest version found online with the version that is currently packaged, and it downloads the latest upstream version only if it’s newer than the current version. However here there’s no “current version” since we’re creating a new package, so we tell uscan that the current version is 0~0, i.e. the lowest version possible, so that whatever version is found online is deemed higher than that. --destdir tells uscan where to save the downloaded files. --force-download overrides uscan’s guess of what it should do: we want it to download the latest upstream version. To be sure, we can have a look in the ~/kali/upstream directory to check what files landed there: kali@kali:~/kali/packages/finalrecon$ ls ~/kali/upstream finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz finalrecon-0.0~git20201107.0d41eb6.tar.xz uscan packed the code from Git in a .tar.xz file, and for some reason (see the line starting with uscan warn: above), it repacked it as a .tar.gz. We don’t really care about the compression, we’re fine with both .gz and .xz. What matters is that we’ll use the file whose name ends with .orig.tar.*, so we’re going to use the .tar.gz. uscan came up with a funny-looking (and rather complicated) version string: 0.0~git20201107.0d41eb6. Why is that? 0.0~ is the lowest starting point for a version string. It’s handy to start from there, so that whenever upstream does a “tagged release”, whatever they choose, it will be greater than our version. So we’ll be able to use it for the package version “as is”. git is informative, and it obviously refers to the VCS used by upstream (examples of other VCS: svn or bzr). 20201107 is the date (YYYYMMDD aka. ISO-8601 format) of the upstream commit that we package. Having the date as part of the version string is needed so that whenever we want to import a new Git snapshot, the date will be newer, and the new version string will be sorted above by the package manager (version strings must ALWAYS go ascending). 0d41eb6 is the Git commit. It’s informative, and it’s a non-ambiguous way to know exactly what upstream code is included in the package. Without it, a developer who wants to know what Git commit was packaged would rely on the date, and if there’s more than one commit on this date, it wouldn’t be clear what commit exactly was packaged. Additionally, this is a UTC date, while usual tools or web browsers usually show dates in local time: another source of error for those who rely on the date only. So having the Git commit as part of the version string is really useful for developers (maybe not so much for users). Alright, we hope that you appreciated this overwhelming amount of information. Let’s move on and keep working on the package. Creating Package Source Code We are now going to create a new empty Git repository: kali@kali:~/kali/packages/finalrecon$ git init Initialized empty Git repository in /home/kali/kali/packages/finalrecon/.git/ kali@kali:~/kali/packages/finalrecon$ We can now import the .tar.gz we previously downloaded into the empty Git repository we just created. When prompted, we remember to accept the default values (or use the flag --no-interactive): kali@kali:~/kali/packages/finalrecon$ gbp import-orig ~/kali/upstream/finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz What will be the source package name? [finalrecon] What is the upstream version? [0.0~git20201107.0d41eb6] gbp:info: Importing '../upstream/finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz' to branch 'upstream'...
gbp:info: Source package is finalrecon gbp:info: Upstream version is 0.0~git20201107.0d41eb6 gbp:info: Successfully imported version 0.0~git20201107.0d41eb6 of /home/kali/kali/upstream/finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz kali@kali:~/kali/packages/finalrecon$ We remember to change the default branch, from master to kali/master (as master is for upstream development), then delete the old branch. We also run a quick git branch -v to visually see the change: kali@kali:~/kali/packages/finalrecon$ git checkout -b kali/master Switched to a new branch 'kali/master' kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git branch -D master Deleted branch master (was 95b196b). kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git branch -v * kali/master bd003d7 New upstream version 0.0~git20201107.0d41eb6 pristine-tar 2413cfe pristine-tar data for finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz upstream bd003d7 New upstream version 0.0~git20201107.0d41eb6 kali@kali:~/kali/packages/finalrecon$ We can now populate the debian/ folder with its related files. We will manually specify the upstream .tar.gz file (as it is not located in ../, but instead ~/kali/upstream/). We will also set the package name to use in the same naming convention as before (<packagename>_<version> as is Debian standards). Note that we need to use the option --addmissing as there’s already a debian/ directory (we created it above for the only purpose of having a watch file). Afterwards we will remove any example files that get automatically generated, as they are not used: kali@kali:~/kali/packages/finalrecon$ dh_make --file ~/kali/upstream/finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz -p finalrecon_0.0~git20201107.0d41eb6 --addmissing --single -y Maintainer Name : Joseph O'Gorman Email-Address : [email protected] Date : Fri, 22 Apr 2022 11:33:33 +0000 Package Name : finalrecon Version : 0.0~git20201107.0d41eb6 License : blank Package Type : single Currently there is not top level Makefile. This may require additional tuning File watch.ex exists, skipping Done. Please edit the files in the debian/ subdirectory now. kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ rm -f debian/*.docs debian/README* debian/*.ex debian/*.EX kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git status On branch kali/master Untracked files: (use "git add <file>..." to include in what will be committed) debian/ nothing added to commit but untracked files present (use "git add" to track) kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ ls debian/ changelog control copyright rules source watch kali@kali:~/kali/packages/finalrecon$ At this point, we have the base packaging files in place, and it feels like a good idea to commit before starting some real work: kali@kali:~/kali/packages/finalrecon$ git add debian/ kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git commit -m "Initial packaging files" [kali/master 52042da] Initial packaging files 6 files changed, 93 insertions(+) create mode 100644 debian/changelog create mode 100644 debian/control create mode 100644 debian/copyright create mode 100755 debian/rules create mode 100644 debian/source/format create mode 100644 debian/watch kali@kali:~/kali/packages/finalrecon$ We can now start to edit the files in the debian/ folder to make sure the information is accurate. 
We can use what we found from before on FinalRecon’s GitHub to supply the correct information. To recap, we need to make sure we have located the following bits of information: Dependencies Description License Maintainers FinalRecon (Pip) Dependencies As there is a requirements.txt file (which is used for Python’s pip to install any Python dependencies that are required for this tool to work), we will need to check what’s needed. For this tool to work, it requires additional software to be installed, aka dependencies. How the tool is coded determines what is required (or only recommended) to be installed. FinalRecon is using various Python libraries and does not call any system commands. In Python’s ecosystem, there is pip. This is Python’s package manager, which can be used to download and install any Python libraries. However, we are trying to build a package for Debian package management instead. As a result, any Python libraries need to be ported over to Debian format, in order for our package to use them (so the OS can track any files, allowing for cleaner upgrades and un-installs of packages). Let’s start out by looking to see what is needed outside of the standard values, for this tool to work: kali@kali:~/kali/packages/finalrecon$ cat requirements.txt requests ipwhois bs4 lxml dnslib aiohttp aiodns psycopg2 tldextract icmplib kali@kali:~/kali/packages/finalrecon$ We then try to search for each dependency from requirements.txt in apt-cache, to make sure that we have everything in the Kali Linux repository: kali@kali:~/kali/packages/finalrecon$ sudo apt update kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ apt-cache search ipwhois | grep -i python3 python3-ipwhois - Retrieve and parse whois data for IP addresses (Python 3) kali@kali:~/kali/packages/finalrecon$ We could search each one manually by repeating the above process for all items in requirements.txt, or we can make a quick loop to automate it. During this process, we will notice one dependency which does not have an entry (icmplib): kali@kali:~/kali/packages/finalrecon$ cat requirements.txt | while read x; do apt-cache search $x | grep -i "python3-$x -" \ || echo --MISSING $x--; done python3-requests - elegant and simple HTTP library for Python3, built for human beings python3-ipwhois - Retrieve and parse whois data for IP addresses (Python 3) python3-bs4 - error-tolerant HTML parser for Python 3 python3-lxml - pythonic binding for the libxml2 and libxslt libraries python3-dnslib - Module to encode/decode DNS wire-format packets (Python 3) python3-aiohttp - http client/server for asyncio python3-aiodns - Asynchronous DNS resolver library for Python 3 python3-psycopg2 - Python 3 module for PostgreSQL python3-tldextract - Python library for separating TLDs --MISSING icmplib-- kali@kali:~/kali/packages/finalrecon$ We can try and broaden our search for icmplib, as we were limiting output last time (by using grep): kali@kali:~/kali/packages/finalrecon$ apt-cache search icmplib | grep -i python3 kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ apt-cache search icmplib kali@kali:~/kali/packages/finalrecon$ Unfortunately it appears that Kali Linux does not have this dependency (Python’s icmplib) in the repository at this point in time. This means we will need to extend our packaging process to accommodate packaging up icmplib as well, to allow us to completely package up FinalRecon. We will first look for icmplib in the pypi.org repository.
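As a side note, if we want to double-check a PyPI project’s declared dependencies from the command line before packaging it, something along these lines works (the /tmp paths are purely illustrative):
# grab only the source distribution, without pulling in any dependencies
pip3 download icmplib --no-deps --no-binary :all: -d /tmp/icmplib-check
# unpack it and look for an install_requires entry in its setup.py
tar -xf /tmp/icmplib-check/icmplib-*.tar.gz -C /tmp/icmplib-check
grep -n 'install_requires' /tmp/icmplib-check/icmplib-*/setup.py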
We can easily find icmplib on PyPI along with the link to its GitHub page. If we do the same process with icmplib, looking over the GitHub page as we did for FinalRecon, we can see that icmplib will not need additional dependencies (there is no requirements.txt file and setup.py does not list anything for install_requires) and therefore will be a relatively straightforward Python package. We can now either: Continue to package FinalRecon, before moving on to icmplib. We have to remember that we cannot successfully build a complete working package until we are done with icmplib. Pause FinalRecon packaging, and switch our focus to icmplib. We have to make sure we take detailed notes of the work we have done so far and the information gathered. We will go with the former option, and continue as far as we can with FinalRecon. Editing FinalRecon Package Source Code We can now start to edit the files in the debian/ folder. Changelog We will now perform what are our standard changes (#1 (Instaloader) & #2 (Photon)) to the version, distribution and description. The resulting file should be similar to the following: kali@kali:~/kali/packages/finalrecon$ vim debian/changelog kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/changelog finalrecon (0.0~git20201107.0d41eb6-0kali1) kali-dev; urgency=medium * Initial release -- Joseph O'Gorman <[email protected]> Fri, 22 Apr 2022 11:33:33 +0700 kali@kali:~/kali/packages/finalrecon$ Control Using what we know from the information we have already gathered from GitHub and the source code, it is once again similar to our previous packaging guides (#1 (Instaloader) & #2 (Photon)). We should have a good understanding of what needs to be altered now. As there is no code that needs to be compiled, we can set Architecture: all. This is true for most Python scripts, as they are not providing Python “extensions”. If they were, they would generate compiled .so files (e.g. psycopg2). We make sure to include the Python dependencies for building the package as well as the tool dependencies to run (the values from pip). There is one thing to note, and that is python3-icmplib. This package does not exist yet. We are adding it in now, as we will be creating it soon; this saves us from having to come back and add it later.
This does mean that we will be unable to build our package until we finish with icmplib: kali@kali:~/kali/packages/finalrecon$ vim debian/control kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/control Source: finalrecon Section: misc Priority: optional Maintainer: Kali Developers <[email protected]> Uploaders: Joseph O'Gorman <[email protected]> Build-Depends: debhelper-compat (= 12), dh-python, python3-aiodns, python3-aiohttp, python3-all, python3-bs4, python3-dnslib, python3-icmplib, python3-ipwhois, python3-lxml, python3-psycopg2, python3-requests, python3-tldextract, Standards-Version: 4.5.0 Homepage: https://github.com/thewhiteh4t/FinalRecon Vcs-Browser: https://gitlab.com/kalilinux/packages/finalrecon Vcs-Git: https://gitlab.com/kalilinux/packages/finalrecon Package: finalrecon Architecture: all Depends: ${misc:Depends}, ${python3:Depends}, python3-aiodns, python3-aiohttp, python3-bs4, python3-dnslib, python3-icmplib, python3-ipwhois, python3-lxml, python3-psycopg2, python3-requests, python3-tldextract, Description: A fast and simple python script for web reconnaissance A fast and simple python script for web reconnaissance that follows a modular structure and provides detailed information on various areas. kali@kali:~/kali/packages/finalrecon$ Copyright As we have already finished getting the copyright information (license, name, contact, year and source), we now just need to add it: kali@kali:~/kali/packages/finalrecon$ vim debian/copyright kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/copyright Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: finalrecon Upstream-Contact: thewhiteh4t <[email protected]> Source: https://github.com/thewhiteh4t/FinalRecon Files: * Copyright: 2020 thewhiteh4t <[email protected]> License: MIT Files: debian/* Copyright: 2020 Joseph O'Gorman <[email protected]> License: MIT License: MIT Copyright (c) 2020 thewhiteh4t . Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: . The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. . THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. kali@kali:~/kali/packages/finalrecon$ Rules The start of the rules file will look very similar to #2 (Photon), however there is a new lower section. 
This part is to set the permissions on finalrecon.py, so when we call it using the symlink (created by debian/links), it will be executable: kali@kali:~/kali/packages/finalrecon$ vim debian/rules kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/rules #!/usr/bin/make -f #export DH_VERBOSE = 1 export PYBUILD_NAME=finalrecon %: dh $@ --with python3 override_dh_install: dh_install chmod 0755 debian/finalrecon/usr/share/finalrecon/finalrecon.py kali@kali:~/kali/packages/finalrecon$ Beware that the “dh” line needs to be indented by a single tabulation character, rather than spaces. Watch The watch file was already covered at the beginning of this example, and is configured to track the latest Git commit on the main branch. You can also add the common configuration for GitHub, but leave it commented out, so that whenever upstream issues a release, everything is ready in your watch file and you’ll just need to uncomment it: kali@kali:~/kali/packages/finalrecon$ vim debian/watch kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/watch version=4 opts=mode=git,pgpmode=none \ https://github.com/thewhiteh4t/FinalRecon HEAD # Use the following when upstream starts to tag releases: #opts=filenamemangle=s/.+\/v?(\d\S+)\.tar\.gz/finalrecon-$1\.tar\.gz/ \ # https://github.com/thewhiteh4t/FinalRecon/tags .*/v?(\d\S+)\.tar\.gz kali@kali:~/kali/packages/finalrecon$ Links Unlike last time (#1 (Instaloader) & #2 (Photon)), we are not going to use a “helper-script”, but instead create a symlink pointing to the main Python file, which will still be in $PATH: kali@kali:~/kali/packages/finalrecon$ vim debian/links kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/links usr/share/finalrecon/finalrecon.py usr/bin/finalrecon kali@kali:~/kali/packages/finalrecon$ Install We can now create the install file, which is required to say what files go where on the system during the unpacking of the package. We need to make sure to include everything from the root of the package directory: kali@kali:~/kali/packages/finalrecon$ vim debian/finalrecon.install kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/finalrecon.install conf usr/share/finalrecon/ finalrecon.py usr/share/finalrecon/ modules usr/share/finalrecon/ wordlists usr/share/finalrecon/ kali@kali:~/kali/packages/finalrecon$ Note that there is no leading slash on the destination directory. Patches For this tool we will need to also implement a patch to disable the update and dependency checker. If the program self-updates, the system will not be aware of any additional files outside of the package, so things then start to get messy. The dependencies are also being handled by our package now instead. Knowing you need to do this comes with either knowing the tool or auditing the source code. The patch process looks like the following (for more information see our previous guide, #2 (Photon)): kali@kali:~/kali/packages/finalrecon$ gbp pq import gbp:info: Trying to apply patches at 'f1c4c9f8d25224186749ce69a9f403f207feda03' gbp:info: 0 patches listed in 'debian/patches/series' imported on 'patch-queue/kali/master' kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ vim finalrecon.py kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git add finalrecon.py kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git commit -m "disable requirements check" [...]
kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ vim finalrecon.py kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git add finalrecon.py kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git commit -m "disable ver_check" [...] kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ gbp pq export gbp:info: On 'patch-queue/kali/master', switching to 'kali/master' gbp:info: Generating patches from git (kali/master..patch-queue/kali/master) kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git branch -v * kali/master bd003d7 New upstream version 0.0~git20201107.0d41eb6 patch-queue/kali/master 2935f22 disable ver_check pristine-tar 2413cfe pristine-tar data for finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz upstream bd003d7 New upstream version 0.0~git20201107.0d41eb6 kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ ls debian/patches/ disable-requirements-check.patch disable-ver_check.patch series kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/patches/disable-requirements-check.patch From: Joseph O'Gorman <[email protected]> Subject: disable requirements check --- finalrecon.py | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/finalrecon.py b/finalrecon.py index 735f40b..95e99f1 100644 --- a/finalrecon.py +++ b/finalrecon.py @@ -26,22 +26,22 @@ else: path_to_script = os.path.dirname(os.path.realpath(__file__)) -with open(path_to_script + '/requirements.txt', 'r') as rqr: - pkg_list = rqr.read().strip().split('\n') - -print('\n' + G + '[+]' + C + ' Checking Dependencies...' + W + '\n') - -for pkg in pkg_list: - spec = importlib.util.find_spec(pkg) - if spec is None: - print(R + '[-]' + W + ' {}'.format(pkg) + C + ' is not Installed!' + W) - fail = True - else: - pass -if fail == True: - print('\n' + R + '[-]' + C + ' Please Execute ' + W + 'pip3 install -r requirements.txt' + C + ' to Install Missing Packages' + W + '\n') - os.remove(pid_path) - sys.exit() +#with open(path_to_script + '/requirements.txt', 'r') as rqr: +# pkg_list = rqr.read().strip().split('\n') + +#print('\n' + G + '[+]' + C + ' Checking Dependencies...' + W + '\n') + +#for pkg in pkg_list: +# spec = importlib.util.find_spec(pkg) +# if spec is None: +# print(R + '[-]' + W + ' {}'.format(pkg) + C + ' is not Installed!' + W) +# fail = True +# else: +# pass +#if fail == True: +# print('\n' + R + '[-]' + C + ' Please Execute ' + W + 'pip3 install -r requirements.txt' + C + ' to Install Missing Packages' + W + '\n') +# os.remove(pid_path) +# sys.exit() import argparse kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/patches/disable-ver_check.patch From: Joseph O'Gorman <[email protected]> Subject: disable ver_check --- finalrecon.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/finalrecon.py b/finalrecon.py index 95e99f1..d21877c 100644 --- a/finalrecon.py +++ b/finalrecon.py @@ -207,7 +207,7 @@ def full_recon(): try: fetch_meta() banner() - ver_check() + #ver_check() if target.startswith(('http', 'https')) == False: print(R + '[-]' + C + ' Protocol Missing, Include ' + W + 'http://' + C + ' or ' + W + 'https://' + '\n') kali@kali:~/kali/packages/finalrecon$ Runtime Test The runtime test process looks like the following (for more information see our previous guide, #2 (Photon)).
Just like last time, we will just create a minimal test to look for the help screen: kali@kali:~/kali/packages/finalrecon$ mkdir -p debian/tests/ kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ vim debian/tests/control kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/tests/control Test-Command: finalrecon -h Restrictions: superficial kali@kali:~/kali/packages/finalrecon$ Completing dependencies In theory, we should have a complete working package now, with the exception of the missing icmplib dependency. So we now need to package up icmplib, before trying to finally build FinalRecon. icmplib Naming Packages Unlike our previous guides (#1 (Instaloader) & #2 (Photon)) where we used the same name for both source package and binary package, this time the two will differ. The naming convention for a binary package is python3-<package>, which is important to follow as it has an impact at a technical level. However a source package can be python-<package> (or even just <package>). It does not matter as much if this is not followed, as it will not break anything. However, from a Kali team point of view we prefer and will use python-<package>. See this Debian resource for more information. Cheat Sheet Packaging This package is straightforward using python3-setuptools (like in our first guide (Instaloader)), so to prevent this guide from getting too long, we will not be going step by step for icmplib. For more information on building Python libraries, see the Debian resource. Here is a quick overview of the commands needed to build the package: mkdir -p ~/kali/upstream/ ~/kali/build-area/ ~/kali/packages/python-icmplib/ wget https://github.com/ValentinBELYN/icmplib/archive/v1.2.2.tar.gz -O ~/kali/upstream/python-icmplib_1.2.2.orig.tar.gz cd ~/kali/packages/python-icmplib/ git init gbp import-orig ~/kali/upstream/python-icmplib_1.2.2.orig.tar.gz --no-interactive --debian-branch=kali/master dh_make --file ~/kali/upstream/python-icmplib_1.2.2.orig.tar.gz -p python-icmplib_1.2.2 --python -y rm -f debian/{*.docs,README*,*.ex,*.EX} vim debian/changelog vim debian/control vim debian/copyright vim debian/rules vim debian/watch gbp buildpackage --git-builder=sbuild --git-export=WC sudo dpkg -i ~/kali/build-area/python3-icmplib_1.2.2-0kali1_all.deb pip search icmplib git add debian/ git commit -m "Initial release" Previewing the contents of the key files in debian/: Changelog Straightforward, like all the other guides, #1 (Instaloader) & #2 (Photon): edit version, distribution and description. Note, python-icmplib needs to match the source name in debian/control: kali@kali:~/kali/packages/python-icmplib$ cat debian/changelog python-icmplib (1.2.2-0kali1) kali-dev; urgency=medium * Initial release -- Joseph O'Gorman <[email protected]> Mon, 12 Oct 2020 18:10:27 -0400 kali@kali:~/kali/packages/python-icmplib$ Control This is a bit different to what we have seen previously, with Section: python. This is because it’s a Python library. For more information see Debian’s write up as well as the different options. We also need to name the package differently. The source package part of debian/control is the top part, which gets named with the Source: field, whereas the binary package part is the lower half and uses Package: for its name. Where possible Kali Linux will always try to do both a source and binary package (see the Debian resource for more information).
Note, the source name python-icmplib needs to match the one in debian/changelog: kali@kali:~/kali/packages/python-icmplib$ cat debian/control Source: python-icmplib Section: python Priority: optional Maintainer: Kali Developers <[email protected]> Uploaders: Joseph O'Gorman <[email protected]> Build-Depends: debhelper-compat (= 12), dh-python, python3-all, python3-setuptools Standards-Version: 4.5.0 Homepage: https://github.com/ValentinBELYN/icmplib Vcs-Browser: https://gitlab.com/kalilinux/packages/python-icmplib Vcs-Git: https://gitlab.com/kalilinux/packages/python-icmplib.git Package: python3-icmplib Architecture: all Depends: ${python3:Depends}, ${misc:Depends} Description: Python tool to forge ICMP packages icmplib is a brand new and modern implementation of the ICMP protocol in Python Able to forge ICMP packages to make your own ping, multiping, traceroute etc kali@kali:~/kali/packages/python-icmplib$ Copyright As we renamed the orig.tar.gz, the detected upstream name is incorrect, as upstream would not normally have a leading python- prefix. We can get the correct name from the source URL: kali@kali:~/kali/packages/python-icmplib$ cat debian/copyright Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: icmplib Upstream-Contact: Valentin BELYN <[email protected]> Source: https://github.com/ValentinBELYN/icmplib Files: * Copyright: 2020 Valentin BELYN <[email protected]> License: LGPL-3+ Files: debian/* Copyright: 2020 Joseph O'Gorman <[email protected]> License: LGPL-3+ License: LGPL-3+ This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version. . This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. . You should have received a copy of the GNU Lesser General Public License along with this program; if not, see <https://www.gnu.org/licenses/>. . On Debian systems, the full text of the GNU Lesser General Public License version 3 can be found in the file `/usr/share/common-licenses/LGPL-3'. kali@kali:~/kali/packages/python-icmplib$ Rules We need to make sure to drop any leading python- when defining PYBUILD_NAME, even though the binary package which gets produced (as defined in debian/control) will be python3-icmplib. This is because pybuild only wants the Python module name: kali@kali:~/kali/packages/python-icmplib$ cat debian/rules #!/usr/bin/make -f #export DH_VERBOSE = 1 export PYBUILD_NAME=icmplib %: dh $@ --with python3 --buildsystem=pybuild kali@kali:~/kali/packages/python-icmplib$ Watch Straightforward, like all the other guides, #1 (Instaloader) & #2 (Photon), using the Debian standard watch file for GitHub: kali@kali:~/kali/packages/python-icmplib$ cat debian/watch version=4 opts=filenamemangle=s/.+\/v?(\d\S+)\.tar\.gz/icmplib-$1\.tar\.gz/ \ https://github.com/ValentinBELYN/icmplib/tags .*/v?(\d\S+)\.tar\.gz kali@kali:~/kali/packages/python-icmplib$ We have successfully managed to build a Python 3 library file, icmplib! Final FinalRecon Build As python3-icmplib may not have been pushed out or accepted into Kali Linux yet, or as we may want to submit both packages at the same time, we can include the recently generated package in the chroot for sbuild to use, since it is a listed requirement for FinalRecon.
As we are also unsure about the status of the package, we may not want to commit the latest edits to Git. So we will add --git-export=WC when building the package: kali@kali:~/kali/packages/python-icmplib$ cd ~/kali/packages/finalrecon/ kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ gbp buildpackage \ --git-builder=sbuild --git-export=WC \ --extra-package=$HOME/kali/build-area/python3-icmplib_1.2.2-0kali1_all.deb [...] kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ ls -lah ~/kali/build-area/finalrecon_*.deb -rw-rw-r-- 1 kali kali 83K Nov 8 07:44 /home/kali/kali/build-area/finalrecon_0.0~git20201107.0d41eb6-0kali1_all.deb kali@kali:~/kali/packages/finalrecon$ Before we try to test our newly generated package, we remember that in debian/control we listed a few dependencies (not only to build the package but also to run it). dpkg will not satisfy these requirements for us, so we need to install them manually first. We can check what is missing from our operating system by doing: kali@kali:~/kali/packages/finalrecon$ dpkg-checkbuilddeps dpkg-checkbuilddeps: error: Unmet build dependencies: python3-ipwhois python3-dnslib python3-aiohttp python3-aiodns python3-psycopg2 python3-tldextract kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ sudo apt -y build-dep . [...] kali@kali:~/kali/packages/finalrecon$ Our package has been built and dependencies have been installed. It’s now time to finally install FinalRecon: kali@kali:~/kali/packages/finalrecon$ sudo dpkg -i ~/kali/build-area/finalrecon_*.deb [...] kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ dpkg -l | grep final ii finalrecon 0.0~git20201107.0d41eb6-0kali1 all A fast and simple python script for web reconnaissance kali@kali:~/kali/packages/finalrecon$ We have successfully managed to build FinalRecon as a package! Let’s test to make sure it works: kali@kali:~/kali/packages/finalrecon$ finalrecon usage: finalrecon [-h] [--headers] [--sslinfo] [--whois] [--crawl] [--dns] [--sub] [--trace] [--dir] [--ps] [--full] [-t T] [-T T] [-w W] [-r] [-s] [-sp SP] [-d D] [-e E] [-m M] [-p P] [-tt TT] [-o O] url finalrecon: error: the following arguments are required: url kali@kali:~/kali/packages/finalrecon$ Saving Our Work At this point, we can save the work we have put in: kali@kali:~/kali/packages/finalrecon$ git add debian/ kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git commit -m "Initial release" [kali/master d1c9f75] Initial release 12 files changed, 169 insertions(+) create mode 100644 debian/changelog create mode 100644 debian/control create mode 100644 debian/copyright create mode 100644 debian/finalrecon.install create mode 100644 debian/links create mode 100644 debian/patches/disable-requirements-check.patch create mode 100644 debian/patches/disable-ver_check.patch create mode 100644 debian/patches/series create mode 100755 debian/rules create mode 100644 debian/source/format create mode 100644 debian/tests/control create mode 100644 debian/watch kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git status On branch kali/master nothing to commit, working tree clean kali@kali:~/kali/packages/finalrecon$ We can now finish up the packaging by putting in a request on the Kali Linux bug tracker for these packages to be added! Message From Kali Team
8585
dbpedia
0
44
https://www.geeksforgeeks.org/how-to-install-python-packages-for-aws-lambda-layers/
en
How to Install Python Packages for AWS Lambda Layers?
https://media.geeksforge…_200x200-min.png
https://media.geeksforge…_200x200-min.png
[ "https://media.geeksforgeeks.org/gfg-gg-logo.svg", "https://media.geeksforgeeks.org/auth-dashboard-uploads/Google-news.svg", "https://media.geeksforgeeks.org/wp-content/uploads/20220310181416/Hnetcomimage1.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310165121/11.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310181823/event.PNG", "https://media.geeksforgeeks.org/wp-content/uploads/20220310182003/4.PNG", "https://media.geeksforgeeks.org/wp-content/uploads/20220310165350/31.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310165448/41.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310192812/dock1.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310195856/dock3.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310165642/27.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310165820/26.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220223192008/25.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310200711/docker.PNG", "https://media.geeksforgeeks.org/wp-content/uploads/20220310201819/layer1171.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310202054/mypa.PNG", "https://media.geeksforgeeks.org/wp-content/uploads/20220223192831/11.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310171404/124.jpg", "https://media.geeksforgeeks.org/wp-content/uploads/20220310165944/13.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310202626/createlayer.PNG", "https://media.geeksforgeeks.org/wp-content/uploads/20220310203015/laye71.png", "https://media.geeksforgeeks.org/wp-content/uploads/20220310170056/23.PNG", "https://media.geeksforgeeks.org/wp-content/uploads/20220310170132/24.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/Google-news.svg", "https://media.geeksforgeeks.org/auth-dashboard-uploads/new-premium-rbanner-us.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/gfgFooterLogo.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/googleplay.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/appstore.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/suggestChangeIcon.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/createImprovementIcon.png" ]
[]
[]
[ "Data Structures", "Algorithms", "Python", "Java", "C", "C++", "JavaScript", "Android Development", "SQL", "Data Science", "Machine Learning", "PHP", "Web Development", "System Design", "Tutorial", "Technical Blogs", "Interview Experience", "Interview Preparation", "Programming", "Competitive Programming", "Jobs", "Coding Contests", "GATE CSE", "HTML", "CSS", "React", "NodeJS", "Placement", "Aptitude", "Quiz", "Computer Science", "Programming Examples", "GeeksforGeeks Courses", "Puzzles", "SSC", "Banking", "UPSC", "Commerce", "Finance", "CBSE", "School", "k12", "General Knowledge", "News", "Mathematics", "Exams" ]
null
[ "GeeksforGeeks" ]
2022-03-22T12:00:44
A Computer Science portal for geeks. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions.
en
https://media.geeksforge…/gfg_favicon.png
GeeksforGeeks
https://www.geeksforgeeks.org/how-to-install-python-packages-for-aws-lambda-layers/
AWS Lambda Layer is a zip file archive that contains the required additional code (libraries, dependencies, or custom runtimes) or data to run your AWS Lambda function. AWS Lambda function can then pull this required content in form of lambda layers. When an AWS Lambda function is invoked, the layer with all the dependencies is loaded along with it during the runtime. Why do we need lambda layers? AWS Lambda function supports only some standard libraries during its runtime. Therefore it becomes problematic when you have to use external libraries (for example pandas) with your lambda function. In such cases, we can make use of lambda layers or a deployment package. But using a lambda layer as compared to a deployment package is rather useful. Advantages of using lambda layers Using AWS Lambda Layers has the following benefits: Reusability: One lambda layer can be used across many different AWS Lambda functions. Code-sharing: Lambda layers enable us to share the common code or functions, libraries, and dependencies among various lambda functions. Using Lambda layers helps you focus on your main code or business logic. Additionally, it helps keep your Lambda function code smaller. Using Lambda layers helps reduce deployment package size. If there is a need to update your common code or any dependency you can do so in one place rather than making changes in individual lambda functions. Since lambda layers provide a feature to store different versions you can use the older version of a package or a new version as per the requirements. Note: A lambda function can have up to 5 layers. In this tutorial, we will see how to install python packages for AWS Lambda Layers. Note that regardless of which python package you want to use with your lambda functions, the below steps will be the same. Steps to add python packages in AWS lambda layers Step 1: Go to the AWS management console. Step 2: Click on create function. Step 3: Create a lambda function named “mylambda” Step 4: Choose Python 3.9 and x86_64 architecture and click on create a function Step 5: Now try importing the requests module in your lambda function. So, create an event named “myevent” by clicking the down arrow on the test button. Step 6: Deploy the function. Now click on test. As soon as you click on the test, you will see an error message. To create a lambda layer we need to create a zip file containing all the dependencies for the ‘requests’ package and upload it to our layer. To create this zip file we will make use of docker. Why docker? Since lambda uses the Amazon Linux environment, if you are using windows and create a zip file of dependencies it might not work while you run your lambda function. After you finish setting up docker, open the command prompt and run: docker run -it ubuntu The flag “-it” is used to open an interactive shell. Note: If you get an error after running the above command check if you have an ubuntu image. To check for the docker images, use the command: docker images Now run the following commands to update, install the required Python version and install pip. apt update apt install python3.9 apt install python3-pip Since we also have to make a zip file afterward, install zip. apt install zip Create a directory where we want to install our requests package. mkdir -p layer/python/lib/python3.9/site-packages This will create a folder named: “layer”. 
Finally, install the requests package by using the command: pip3 install requests -t layer/python/lib/python3.9/site-packages/ Now go to the “layer” folder: cd layer If you do ‘ls’ you will see a folder named python here. Now create a zip archive of the installed package in the layer directory: zip -r mypackage.zip * Now we have to copy the zip file mypackage.zip to our local folder. To do that, open a new command prompt and get the container ID by running: docker ps -a Now use the below command to copy the zip file from your container to a local folder. Format: docker cp <Container-ID:path_of_zip_file> <path_where_you_want_to_copy> Example: docker cp 7cdd497f0560:/layer/mypackage.zip C:\Users\lenovo\Desktop\layer Now you will have a ‘mypackage.zip’ file in the path you described. Now let’s create a lambda layer. On the left side of the console click on layers. Click on the create a layer button. Name your layer “mylayer”. Notice that you have an option to upload a zip file or upload a file from Amazon S3. If the file is larger, upload it to S3 and give the link to the zip file. In this tutorial, we will directly upload it as a zip file. Choose the compatible architecture as x86_64, since we selected the same while creating our lambda function. Choose the compatible runtime as Python 3.9, upload the zip file, and click on create. A lambda layer will be successfully created. Now we just need to attach it to our lambda function. If you are creating the layer for the first time your version number will be reflected as 1. (Lambda layer versions are immutable; the version number is incremented by 1 each time you publish a new version of the layer.) Navigate back to the lambda function. Scroll down to the bottom and click on add a layer (under the Layers section). Click on custom layer, select ‘mylayer’, select the version and click on add. Now it’s time to test it! Click on test. Your lambda function will now run successfully! Some important points: The unzipped files from the lambda layer will be present in the /opt directory in the Lambda runtime. You can also use the AWS Cloud9 environment to create the zip file instead of Docker.
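To make the final test a little more end-to-end, a minimal handler along the following lines can be used as the Lambda function code (the URL is just an example endpoint; any reachable HTTPS URL will do):
import requests  # resolved from the attached layer at runtime

def lambda_handler(event, context):
    # a simple call to prove that the requests package from the layer is importable and usable
    response = requests.get("https://api.github.com")
    return {
        "statusCode": 200,
        "body": f"requests {requests.__version__} works, upstream returned {response.status_code}"
    }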
8585
dbpedia
3
11
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_and_using_dynamic_programming_languages/assembly_packaging-python-3-rpms_installing-and-using-dynamic-programming-languages
en
Chapter 3. Packaging Python 3 RPMs
[ "https://docs.redhat.com/Logo-Red_Hat-Documentation-A-Reverse-RGB.svg", "https://docs.redhat.com/Logo-Red_Hat-Documentation-A-Reverse-RGB.svg" ]
[]
[]
[ "" ]
null
[]
null
Chapter 3. Packaging Python 3 RPMs | Red Hat Documentation
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_and_using_dynamic_programming_languages/assembly_packaging-python-3-rpms_installing-and-using-dynamic-programming-languages
You can install Python packages on your system either from the upstream PyPI repository using the pip installer, or using the DNF package manager. DNF uses the RPM package format, which offers more downstream control over the software. The packaging format of native Python packages is defined by Python Packaging Authority (PyPA) Specifications. Most Python projects use the distutils or setuptools utilities for packaging, and define package information in the setup.py file. However, possibilities of creating native Python packages have evolved over time. For more information about emerging packaging standards, see pyproject-rpm-macros. This chapter describes how to package a Python project that uses setup.py into an RPM package. This approach provides the following advantages compared to native Python packages: Dependencies on Python and non-Python packages are possible and strictly enforced by the DNF package manager. You can cryptographically sign the packages. With cryptographic signing, you can verify, integrate, and test content of RPM packages with the rest of the operating system. You can execute tests during the build process. 3.1. SPEC file description for a Python package A SPEC file contains instructions that the rpmbuild utility uses to build an RPM. The instructions are included in a series of sections. A SPEC file has two main parts in which the sections are defined: Preamble (contains a series of metadata items that are used in the Body) Body (contains the main part of the instructions) An RPM SPEC file for Python projects has some specifics compared to non-Python RPM SPEC files. Important: The name of any RPM package of a Python library must always include the python3-, python3.11-, or python3.12- prefix. Other specifics are shown in the following SPEC file example for the python3*-pello package. For a description of such specifics, see the numbered notes below the example. An example SPEC file for the pello program written in Python (the numbers mark the lines explained in the notes that follow):
%global python3_pkgversion 3.11    1
Name: python-pello    2
Version: 1.0.2
Release: 1%{?dist}
Summary: Example Python library
License: MIT
URL: https://github.com/fedora-python/Pello
Source: %{url}/archive/v%{version}/Pello-%{version}.tar.gz
BuildArch: noarch
BuildRequires: python%{python3_pkgversion}-devel    3
# Build dependencies needed to be specified manually
BuildRequires: python%{python3_pkgversion}-setuptools
# Test dependencies needed to be specified manually
# Also runtime dependencies need to be BuildRequired manually to run tests during build
BuildRequires: python%{python3_pkgversion}-pytest >= 3
%global _description %{expand:
Pello is an example package with an executable that prints Hello World! on the
command line.}
%description %_description
%package -n python%{python3_pkgversion}-pello    4
Summary: %{summary}
%description -n python%{python3_pkgversion}-pello %_description
%prep
%autosetup -p1 -n Pello-%{version}
%build
# The macro only supported projects with setup.py
%py3_build    5
%install
# The macro only supported projects with setup.py
%py3_install
%check    6
%{pytest}
# Note that there is no %%files section for the unversioned python module
%files -n python%{python3_pkgversion}-pello
%doc README.md
%license LICENSE.txt
%{_bindir}/pello_greeting
# The library files needed to be listed manually
%{python3_sitelib}/pello/
# The metadata files needed to be listed manually
%{python3_sitelib}/Pello-*.egg-info/
1 By defining the python3_pkgversion macro, you set which Python version this package will be built for.
To build for the default Python version 3.9, either set the macro to its default value 3 or remove the line entirely. 2 When packaging a Python project into RPM, always add the python- prefix to the original name of the project. The original name here is pello and, therefore, the name of the Source RPM (SRPM) is python-pello. 3 BuildRequires specifies what packages are required to build and test this package. In BuildRequires, always include items providing tools necessary for building Python packages: python3-devel (or python3.11-devel or python3.12-devel) and the relevant projects needed by the specific software that you package, for example, python3-setuptools (or python3.11-setuptools or python3.12-setuptools) or the runtime and testing dependencies needed to run the tests in the %check section. 4 When choosing a name for the binary RPM (the package that users will be able to install), add a versioned Python prefix. Use the python3- prefix for the default Python 3.9, the python3.11- prefix for Python 3.11, or the python3.12- prefix for Python 3.12. You can use the %{python3_pkgversion} macro, which evaluates to 3 for the default Python version 3.9 unless you set it to an explicit version, for example, 3.11 (see footnote 1). 5 The %py3_build and %py3_install macros run the setup.py build and setup.py install commands, respectively, with additional arguments to specify installation locations, the interpreter to use, and other details. 6 The %check section should run the tests of the packaged project. The exact command depends on the project itself, but it is possible to use the %pytest macro to run the pytest command in an RPM-friendly way. 3.2. Common macros for Python 3 RPMs In a SPEC file, always use the macros that are described in the following Macros for Python 3 RPMs table rather than hardcoding their values. You can redefine which Python 3 version is used in these macros by defining the python3_pkgversion macro on top of your SPEC file (see Section 3.1, “SPEC file description for a Python package”). If you define the python3_pkgversion macro, the values of the macros described in the following table will reflect the specified Python 3 version. Table 3.1. Macros for Python 3 RPMsMacroNormal DefinitionDescription Additional resources Python macros in upstream documentation 3.3. Using automatically generated dependencies for Python RPMs The following procedure describes how to use automatically generated dependencies when packaging a Python project as an RPM. Prerequisites A SPEC file for the RPM exists. For more information, see SPEC file description for a Python package. Procedure Make sure that one of the following directories containing upstream-provided metadata is included in the resulting RPM: .dist-info .egg-info The RPM build process automatically generates virtual pythonX.Ydist provides from these directories, for example: python3.9dist(pello) The Python dependency generator then reads the upstream metadata and generates runtime requirements for each RPM package using the generated pythonX.Ydist virtual provides. For example, a generated requirements tag might look as follows: Requires: python3.9dist(requests) Inspect the generated requires. To remove some of the generated requires, use one of the following approaches: Modify the upstream-provided metadata in the %prep section of the SPEC file. Use automatic filtering of dependencies described in the upstream documentation. 
To disable the automatic dependency generator, include the %{?python_disable_dependency_generator} macro above the main package’s %description declaration. Additional resources Automatically generated dependencies
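As a quick sanity check (not part of the official procedure above), the generated metadata of a built binary RPM can be inspected directly; the filename below is only a placeholder for whatever your build produced:
# virtual provides generated from the .dist-info/.egg-info metadata
rpm -qp --provides python3-pello-1.0.2-1.noarch.rpm
# runtime requirements generated by the Python dependency generator
rpm -qp --requires python3-pello-1.0.2-1.noarch.rpm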
8585
dbpedia
0
13
https://betterscientificsoftware.github.io/python-for-hpc/tutorials/python-pypi-packaging/
en
Python: Creating a pip installable package
[ "https://betterscientificsoftware.github.io/python-for-hpc//images/day_and_night.svg" ]
[]
[]
[ "" ]
null
[ "Stephen Hudson" ]
null
Creating a pip installable package for PyPI
en
Python for HPC: Community Materials
https://betterscientificsoftware.github.io//python-for-hpc/tutorials/python-pypi-packaging/
Introduction What is pip? Creating a Python package Creating a source distribution Creating a wheel distribution Testing and Publishing package on PyPI Uploading to testpypi Uploading to PyPI Downloading tarball without install Example projects Feedback Introduction This is a quickstart guide to Python Packaging with a particular focus on the creation of a PyPI package, which will enable users to “pip install” the package. The document is broken down into sections so that readers may easily skips parts of the process they are already familiar with. All but the final section (Uploading to PyPI), can be undertaken as an exercise to understand Python packaging and test the process, without publishing a package on the formal PyPI distribution. For a more detailed reference on package creation, see the official Python Packaging Authority (PyPA) website. Note: PyPI should be pronounced “pie P I” to avoid confusion with pypy (a Python implementation). What is pip? pip is a package management system, specifically designed for installing Python packages from from the internet hosted Python Package Index (commonly known as PyPI). It is the most common way to install Python packages. E.g. The package can now be imported in Python scripts. You may need to run as sudo if you have root privileges, or append --user to install under your home directory (often this will be under $HOME/.local). Note: pip3 is used to install Python3 packages, however in some environments the command pip may point to pip3, just as python may point to Python3. You can use which pip to check this. For this document, examples will show the command simply as pip. Tip: To download a specific version of a package: Tip: To find out what versions are available: This is essentially trying to install a version that does not exist and causes pip to list available versions. Tip: To see what version is currently installed: Package information, including install location, can be obtained by running the Python interpreter: Installing pip is easy: https://pip.pypa.io/en/stable/installing Creating a Python package This article gives an overview of how to create an installable Python package. Note on Ambiguity: The term package can refer to an installable python package within a project (a directory containing an __init__.py file). It can also mean a distribution package which refers to the entire distributed part of the project (as in a source distribution - or “tarball”). Such a package may consist of multiple python package/sub-packages. In most cases the context should be sufficient to make the distinction. A Python project will consist of a root directory with the name of the project. Somewhere inside this will be included a directory which will constitute the main installable package. Most often this has the same name as the project (this is not compulsory but makes things a bit simpler). Inside that package directory, alongside your python files, create a file called __init__.py. This file can be empty, and it denotes the directory as a python package. When you pip install, this directory will be installed and become importable. E.g. A simple project may have this structure: pyexample ├── LICENSE ├── pyexample │ ├── __init__.py │ ├── module_mpi4py_1.py │ ├── module_numpy_1.py │ └── module_numpy_2.py ├── README.rst └── setup.py At the root directory, you will need a setup.py file, which will govern the installation of your package. The setuptools package is recommended for this (the in-built distutils is an older alternative). 
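To make the next paragraphs concrete, here is a rough sketch of what such a setup.py can look like for the pyexample layout above (the author, URL, license and classifier values are placeholders to adapt, not prescribed values):
from setuptools import setup

setup(
    name='pyexample',
    version='0.1.0',
    description='Example project with numpy and mpi4py modules',
    url='https://github.com/username/pyexample',  # placeholder URL
    author='Author Name',                         # placeholder author
    license='MIT',                                # placeholder license
    packages=['pyexample'],
    install_requires=['numpy', 'mpi4py'],
    classifiers=[
        'Development Status :: 3 - Alpha',
        'Programming Language :: Python :: 3',
    ],
)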
The main requirement in setup.py is to call the setup routine, providing project information as keyword arguments. A lot of information can be provided; a minimalist example is sketched above. Further information on setup options can be found in the PyPA packaging instructions, and yet more detailed and up-to-date information in the setuptools command reference. The classifiers are not functional; they are for documentation, and will be listed on the PyPI page once uploaded. It is conventional to include the Python versions supported in this release. A complete list of classifiers is available at: PyPI classifiers list. Having created a setup.py, test the install with pip. In the root dir:
pip install .
This is recommended in place of the default python setup.py install, which uses easy_install. If you have an existing install, and want to ensure the package and dependencies are updated, use --upgrade:
pip install --upgrade .
To uninstall (use the package name):
pip uninstall pyexample
Note: A reliable clean uninstall is one advantage of using setuptools over distutils. It is worth noting that the version in your setup.py will not provide the package attribute __version__. A common place to provide this, along with other meta-data for the package, is inside the __init__.py. This is run whenever the module is imported. E.g. __init__.py may contain:
__version__ = '0.1.0'
__author__ = 'Author Name'
If you now pip install again and run the Python interpreter you should be able to access these variables:
>>> import pyexample
>>> pyexample.__version__
'0.1.0'
This does create the problem of having two places holding the version, which must also match any release tags created (e.g. in git). Various approaches exist for using a single version number. See https://packaging.python.org/guides/single-sourcing-package-version If you wish to create sub-packages, these should ideally be directories inside the main package (re-mapping from other locations is possible using the package_dir argument in setup, but this can cause a problem with develop installs). The sub-packages also require an __init__.py in the directory. Creating a source distribution It is recommended that all Python projects provide a source distribution. PyPI has certain required meta-data that the setup.py should provide. To quickly check if your project has this data use:
python setup.py check
Creating a source distribution

It is recommended that all Python projects provide a source distribution. PyPI has certain required meta-data that the setup.py should provide. To quickly check whether your project has this data, run the check command on your setup.py (sketched below); if nothing is reported, your package is acceptable.

Create a source distribution from your root directory using the sdist command (also sketched below). This creates a dist/ directory containing a compressed archive of the package (e.g. <PACKAGE_NAME>-<VERSION>.tar.gz in Linux). This file is your source distribution. If it does not automatically contain what you want, then you might consider using a MANIFEST file (see https://docs.python.org/distutils/sourcedist).

Note: A <PACKAGE_NAME>.egg-info directory will also be created in your root directory containing meta-data about your distribution. This can safely be deleted if it is not wanted (despite the extension, this is generated even though you have not built an egg format package).

Creating a wheel distribution

Optionally you may create a wheel distribution. This is a built distribution for the current platform. Wheels should be used in place of the older egg format. Bear in mind that any extensions will be built for the given platform, and as such this must be consistent with any other project dependencies. Wheels will speed up installation if you have compiled code extensions, as the build step is not required. If you do not have the wheel package you can pip install it.

There are different types of wheels. If your project is pure Python and Python2/3 compatible, create a universal wheel; if it is not Python2/3 compatible or contains compiled extensions, build a plain (non-universal) wheel instead. The installable wheel will be created under the dist/ directory. A build directory will also be created with the built code. The corresponding commands are sketched below. Further details for building wheels can be found here: https://packaging.python.org/tutorials/distributing-packages
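A sketch of the distribution-building commands described in the two sections above. These are the conventional setuptools invocations; the original guide's exact commands were not preserved in this copy, so treat this as a reasonable reconstruction (newer workflows may use python -m build instead).

    $ python setup.py check                      # verify required PyPI meta-data; no output means OK
    $ python setup.py sdist                      # source distribution -> dist/pyexample-0.1.0.tar.gz

    $ pip install wheel                          # if the wheel package is not already present
    $ python setup.py bdist_wheel --universal    # pure-Python, Python 2/3 compatible projects
    $ python setup.py bdist_wheel                # otherwise (platform-specific / compiled extensions)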
Testing and Publishing package on PyPI

Distributing the package on PyPI will enable anyone on-line to pip install the package. First you must set up an account on PyPI. If you are going to test your package on the PyPI test site, you will need to set up an account there also. This is easy.

Create an account on PyPI: go to https://pypi.python.org and select Register, then follow the instructions.

Create an account on testpypi: go to https://testpypi.python.org and select Register, then follow the instructions.

You will also need a version number. Semantic versioning is recommended (see https://semver.org for details). The standard starting version for a project in development is 0.1.0.

The best approach to uploading to PyPI is to use twine.

IMPORTANT: First you can test your upload using the PyPI test site. It is highly recommended that you do this and test installing your package as below.

NOTE: Once you upload a package to PyPI it is possible to remove it, but you cannot upload another package with the same version number – this breaks the version contract. It is therefore especially prudent to test with testpypi first. Note that anything you put on testpypi should be considered disposable, as the site regularly prunes content.

Uploading to testpypi

This section shows how to upload a source distribution of your package. Further documentation is available at https://packaging.python.org/guides/using-testpypi (this link includes the option of using a pypirc file to abbreviate some of the command lines referenced below). A source distribution provides everything needed to build/install the package on any supported platform. Testsuites, documentation and supporting data can also be included.

You can now upload your package to testpypi with twine, either uploading a single source distribution (assuming the one under dist/ is called pyexample-0.1.0.tar.gz) or all of the generated distributions under the dist/ directory; the latter may be used if you create wheels in addition to a source distribution. You will be requested to give your username and password for your testpypi account. The commands are collected in the sketch at the end of this guide.

Option: You have the option to digitally sign your package when you upload. You will need a gpg key set up to do this. It should be noted, however, that pip does not currently check gpg signatures when installing - this has to be done manually. Digitally signing with your gpg key (e.g. for package pyexample at version 0.1.0) creates a file pyexample-0.1.0.tar.gz.asc, which you then upload alongside the tarball. Note: --detach-sign means you are writing the signature into a separate file *.asc.

The package should now be uploaded to https://testpypi.python.org/pypi. Note how the info/classifiers you supplied in setup.py are shown on the page. You can now test pip install from the command line, for example installing package pyexample into your user install space.

Uploading to PyPI

Once you are happy with the repository in testpypi, uploading to PyPI is the same command line process, but without having to specify the URL arguments - whether you are uploading all distributions created under dist/ or the source distribution with its gpg signature (again, see the sketch at the end of this guide). Your package should now be uploaded to https://pypi.python.org/pypi, and the package should pip install.

It is also recommended that you use virtual environments to test installing dependencies from scratch and for trying out different Python versions. Check the required flags to ensure your environment is isolated. E.g. for Virtualenv use the flag --no-site-packages; for Conda, set the environment variable export PYTHONNOUSERSITE=1 before activating your environment. Packages that are explicitly linked through PYTHONPATH will still be found, however.

Downloading tarball without install

You can test downloading a source distribution (no install), either together with its dependencies or just the package on its own (commands in the sketch at the end of this guide). Downloading the source distribution is a good way to check that it includes what you want by default. If not, then consider adding a MANIFEST file, which instructs setuptools what to include in the source distribution.

Example projects

pyexample: A small sample project using numpy and mpi4py (used as the example above). Location: Github. Note: To run the mpi4py test use at least 2 processors: mpiexec -np 2 python module_mpi4py_1.py

libEnsemble: An Argonne project for running ensembles of calculations. Location: Github, PyPI. Related content includes: a setup.py that maps a different source directory structure to packages and sub-packages using the package_dir setup argument, and use of a MANIFEST file to specify the source distribution.

Feedback

Any feedback/corrections/additions are welcome: If this was helpful, please leave a star on the github page. Leave a comment below. Email: shudson@anl.gov. Or fork on github and make a pull request.
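The twine and pip commands referenced in the publishing and download sections above were not preserved in this copy. The sketch below is a reasonable reconstruction; the original guide predates the current test.pypi.org hostnames, so the repository URLs shown here are an assumption based on present-day twine and pip usage.

    Upload to the PyPI test site (twine prompts for your testpypi credentials):

        $ twine upload --repository-url https://test.pypi.org/legacy/ dist/pyexample-0.1.0.tar.gz
        $ twine upload --repository-url https://test.pypi.org/legacy/ dist/*    # all built distributions

    Optional gpg signing before upload:

        $ gpg --detach-sign -a dist/pyexample-0.1.0.tar.gz     # writes the signature to a separate .asc file

    Test-install from testpypi into your user install space:

        $ pip install --user --index-url https://test.pypi.org/simple/ pyexample

    Upload to the real PyPI (no URL arguments needed) and install:

        $ twine upload dist/*
        $ pip install pyexample

    Download a source distribution without installing:

        $ pip download pyexample                 # package plus dependencies
        $ pip download --no-deps pyexample       # just the package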
8585
dbpedia
2
0
https://en.wikipedia.org/wiki/Autopackage
en
Autopackage
https://upload.wikimedia…package-logo.png
https://upload.wikimedia…package-logo.png
[ "https://en.wikipedia.org/static/images/icons/wikipedia.png", "https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-wordmark-en.svg", "https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-tagline-en.svg", "https://upload.wikimedia.org/wikipedia/commons/thumb/7/74/Autopackage-logo.png/120px-Autopackage-logo.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Autopackage_ready_to_install_software.png/220px-Autopackage_ready_to_install_software.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/4/44/Autopackage_installing_software.png/250px-Autopackage_installing_software.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Free_and_open-source_software_logo_%282009%29.svg/28px-Free_and_open-source_software_logo_%282009%29.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/8/8a/OOjs_UI_icon_edit-ltr-progressive.svg/10px-OOjs_UI_icon_edit-ltr-progressive.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b0/NewTux.svg/13px-NewTux.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Free_and_open-source_software_logo_%282009%29.svg/16px-Free_and_open-source_software_logo_%282009%29.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/d/db/Symbol_list_class.svg/16px-Symbol_list_class.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/9/96/Symbol_category_class.svg/16px-Symbol_category_class.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/9/9c/Symbol_file_class.svg/16px-Symbol_file_class.svg.png", "https://login.wikimedia.org/wiki/Special:CentralAutoLogin/start?type=1x1", "https://en.wikipedia.org/static/images/footer/wikimedia-button.svg", "https://en.wikipedia.org/static/images/footer/poweredby_mediawiki.svg" ]
[]
[]
[ "" ]
null
[ "Contributors to Wikimedia projects" ]
2005-05-14T12:01:14+00:00
en
/static/apple-touch/wikipedia.png
https://en.wikipedia.org/wiki/Autopackage
Linux package management system

Autopackage
Original author(s): Mike Hearn
Developer(s): Jan Niklas Hasse
Initial release: around 2002
Stable release: 1.4.2[1] / May 24, 2009
Written in: Bash, C, C++ and Python
Operating system: Linux
Type: Package management system
License: GNU Lesser General Public License
Website: autopackage.org at the Wayback Machine (archive index); Autopackage at Google Project Hosting

Autopackage is a free computer package management system aimed at making it simple to create a package that can be installed on all Linux distributions, created by Mike Hearn around 2002. In August 2010, Listaller and Autopackage announced that the projects would merge.[2] Projects such as aMSN and Inkscape offered an Autopackage installer, and Freecode offered content submitters a field to put the URL of Autopackages. The list of available packages is very limited, and most program versions are obsolete (for example, the most recent Autopackage of GIMP is 2.2.6, even though GIMP is at version 2.8.2 as of August 2012).[3][4]

Methodology

Autopackage was designed for installing binary, or pre-compiled, versions of non-core applications such as word processors, web browsers, and personal computer games, rather than core libraries and applications such as operating system shells. The concept behind Autopackage was to "improve" Linux into a desktop platform, with stable binary interfaces comparable to Windows and MacOS.[5] Autopackage is not intended to provide installation of core applications and libraries, for compatibility reasons. Using Autopackage to distribute non-core libraries is something of a thorny issue: on the one hand, distributing them via Autopackage allows installation on a greater range of systems; on the other hand, there can be conflicts with native package dependencies.

Autopackage is intended as a complementary system to a distribution's usual packaging system, such as RPM and deb. Unlike these formats, Autopackage verifies dependencies by checking for the presence of deployed files, rather than querying a database of installed packages. This simplifies the design requirements for Autopackage by relying on available resources, rather than necessitating tracking all the package choices of all targeted distributions.[6]

Programs that use Autopackage must also be relocatable, meaning they must be installable to varying directories with a single binary. This enables an Autopackage to be installed by a non-root user in the user's home directory.

Package format

Autopackage packages are indicated by the .package extension. They are executable bash scripts, and can be installed by running them. Files in an Autopackage archive are not easily extracted by anything other than Autopackage itself, as the internal format must be parsed in order to determine file layout and other issues.[7]

Autopackage programs are installed to hard-coded system paths, which may conflict with existing packages installed by other means, thus leading to corruption. This can usually be remedied by uninstalling an older version of a package before installing with Autopackage. Autopackage files can also be installed and removed using the Listaller toolset.[8] Listaller simply includes the Autopackage packages into its own package container format and handles Autopackage like any other Listaller package file.
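Since the article states that a .package file is an executable bash script installed by running it, a minimal usage sketch might look like the following; the filename is purely illustrative.

    $ chmod +x inkscape.package     # make the downloaded package executable (filename illustrative)
    $ ./inkscape.package            # running the script starts its embedded installer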
See also

Free and open-source software portal
AppImage
Flatpak
Listaller
Package management system
Bundle (software distribution)
Linux package formats
List of software package management systems

References
8585
dbpedia
3
46
https://docs.aws.amazon.com/cdk/v2/guide/cdk_pipeline.html
en
Continuous integration and delivery (CI/CD) using CDK Pipelines
https://docs.aws.amazon.com/assets/images/favicon.ico
https://docs.aws.amazon.com/assets/images/favicon.ico
[ "https://d1ge0kk1l5kms0.cloudfront.net/images/G/01/webservices/console/warning.png" ]
[]
[]
[ "CDK", "AWS CDK", "AWS Cloud Development Kit", "IaC", "Infrastructure as code", "AWS", "AWS Cloud", "serverless", "modern applications" ]
null
[]
null
Use the CDK Pipelines module from the AWS Construct Library to configure continuous delivery of AWS CDK applications. When you commit your CDK app's source code into AWS CodeCommit, GitHub , or AWS CodeStar, CDK Pipelines can automatically build, test, and deploy your new version.
en
/assets/images/favicon.ico
https://docs.aws.amazon.com/cdk/v2/guide/cdk_pipeline.html
Use the CDK Pipelines module from the AWS Construct Library to configure continuous delivery of AWS CDK applications. When you commit your CDK app's source code into AWS CodeCommit, GitHub, or AWS CodeStar, CDK Pipelines can automatically build, test, and deploy your new version. CDK Pipelines are self-updating. If you add application stages or stacks, the pipeline automatically reconfigures itself to deploy those new stages or stacks. Bootstrap your AWS environments Before you can use CDK Pipelines, you must bootstrap the AWS environment that you will deploy your stacks to. A CDK Pipeline involves at least two environments. The first environment is where the pipeline is provisioned. The second environment is where you want to deploy the application's stacks or stages to (stages are groups of related stacks). These environments can be the same, but a best practice recommendation is to isolate stages from each other in different environments. Continuous deployment with CDK Pipelines requires the following to be included in the CDK Toolkit stack: The CDK Toolkit will upgrade your existing bootstrap stack or creates a new one if necessary. To bootstrap an environment that can provision an AWS CDK pipeline, invoke cdk bootstrap as shown in the following example. Invoking the AWS CDK Toolkit via the npx command temporarily installs it if necessary. It will also use the version of the Toolkit installed in the current project, if one exists. --cloudformation-execution-policies specifies the ARN of a policy under which future CDK Pipelines deployments will execute. The default AdministratorAccess policy makes sure that your pipeline can deploy every type of AWS resource. If you use this policy, make sure you trust all the code and dependencies that make up your AWS CDK app. Most organizations mandate stricter controls on what kinds of resources can be deployed by automation. Check with the appropriate department within your organization to determine the policy your pipeline should use. You can omit the --profile option if your default AWS profile contains the necessary authentication configuration and AWS Region. macOS/Linux npx cdk bootstrap aws://ACCOUNT-NUMBER/REGION --profile ADMIN-PROFILE \ --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess Windows npx cdk bootstrap aws://ACCOUNT-NUMBER/REGION --profile ADMIN-PROFILE ^ --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess To bootstrap additional environments into which AWS CDK applications will be deployed by the pipeline, use the following commands instead. The --trust option indicates which other account should have permissions to deploy AWS CDK applications into this environment. For this option, specify the pipeline's AWS account ID. Again, you can omit the --profile option if your default AWS profile contains the necessary authentication configuration and AWS Region. macOS/Linux npx cdk bootstrap aws://ACCOUNT-NUMBER/REGION --profile ADMIN-PROFILE \ --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \ --trust PIPELINE-ACCOUNT-NUMBER Windows npx cdk bootstrap aws://ACCOUNT-NUMBER/REGION --profile ADMIN-PROFILE ^ --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess ^ --trust PIPELINE-ACCOUNT-NUMBER If you are upgrading a legacy bootstrapped environment, the previous Amazon S3 bucket is orphaned when the new bucket is created. Delete it manually by using the Amazon S3 console. 
Protecting your bootstrap stack from deletion If a bootstrap stack is deleted, the AWS resources that were originally provisioned in the environment to support CDK deployments will also be deleted. This will cause the pipeline to stop working. If this happens, there is no general solution for recovery. After your environment is bootstrapped, do not delete and recreate the environment’s bootstrap stack. Instead, try to update the bootstrap stack to a new version by running the cdk bootstrap command again. To protect against accidental deletion of your bootstrap stack, we recommend that you provide the --termination-protection option with the cdk bootstrap command to enable termination protection. You can enable termination protection on new or existing bootstrap stacks. To learn more about this option, see --termination-protection. After enabling termination protection, you can use the AWS CLI or CloudFormation console to verify. Initialize a project Create a new, empty GitHub project and clone it to your workstation in the my-pipeline directory. (Our code examples in this topic use GitHub. You can also use AWS CodeStar or AWS CodeCommit.) git clone GITHUB-CLONE-URL my-pipeline cd my-pipeline After cloning, initialize the project as usual. TypeScript $ cdk init app --language typescript JavaScript $ cdk init app --language javascript Python $ cdk init app --language python After the app has been created, also enter the following two commands. These activate the app's Python virtual environment and install the AWS CDK core dependencies. $ source .venv/bin/activate # On Windows, run `.\venv\Scripts\activate` instead $ python -m pip install -r requirements.txt Java $ cdk init app --language java If you are using an IDE, you can now open or import the project. In Eclipse, for example, choose File > Import > Maven > Existing Maven Projects. Make sure that the project settings are set to use Java 8 (1.8). C# $ cdk init app --language csharp If you are using Visual Studio, open the solution file in the src directory. Go $ cdk init app --language go After the app has been created, also enter the following command to install the AWS Construct Library modules that the app requires. $ go get Define a pipeline Your CDK Pipelines application will include at least two stacks: one that represents the pipeline itself, and one or more stacks that represent the application deployed through it. Stacks can also be grouped into stages, which you can use to deploy copies of infrastructure stacks to different environments. For now, we'll consider the pipeline, and later delve into the application it will deploy. The construct CodePipeline is the construct that represents a CDK Pipeline that uses AWS CodePipeline as its deployment engine. When you instantiate CodePipeline in a stack, you define the source location for the pipeline (such as a GitHub repository). You also define the commands to build the app. For example, the following defines a pipeline whose source is stored in a GitHub repository. It also includes a build step for a TypeScript CDK application. Fill in the information about your GitHub repo where indicated. You'll also need to update the instantiation of the pipeline stack to specify the AWS account and Region. 
TypeScript In lib/my-pipeline-stack.ts (may vary if your project folder isn't named my-pipeline): import * as cdk from 'aws-cdk-lib'; import { Construct } from 'constructs'; import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines'; export class MyPipelineStack extends cdk.Stack { constructor(scope: Construct, id: string, props?: cdk.StackProps) { super(scope, id, props); const pipeline = new CodePipeline(this, 'Pipeline', { pipelineName: 'MyPipeline', synth: new ShellStep('Synth', { input: CodePipelineSource.gitHub('OWNER/REPO', 'main'), commands: ['npm ci', 'npm run build', 'npx cdk synth'] }) }); } } In bin/my-pipeline.ts (may vary if your project folder isn't named my-pipeline): #!/usr/bin/env node import * as cdk from 'aws-cdk-lib'; import { MyPipelineStack } from '../lib/my-pipeline-stack'; const app = new cdk.App(); new MyPipelineStack(app, 'MyPipelineStack', { env: { account: '111111111111', region: 'eu-west-1', } }); app.synth(); JavaScript In lib/my-pipeline-stack.js (may vary if your project folder isn't named my-pipeline): const cdk = require('aws-cdk-lib'); const { CodePipeline, CodePipelineSource, ShellStep } = require('aws-cdk-lib/pipelines'); class MyPipelineStack extends cdk.Stack { constructor(scope, id, props) { super(scope, id, props); const pipeline = new CodePipeline(this, 'Pipeline', { pipelineName: 'MyPipeline', synth: new ShellStep('Synth', { input: CodePipelineSource.gitHub('OWNER/REPO', 'main'), commands: ['npm ci', 'npm run build', 'npx cdk synth'] }) }); } } module.exports = { MyPipelineStack } In bin/my-pipeline.js (may vary if your project folder isn't named my-pipeline): #!/usr/bin/env node const cdk = require('aws-cdk-lib'); const { MyPipelineStack } = require('../lib/my-pipeline-stack'); const app = new cdk.App(); new MyPipelineStack(app, 'MyPipelineStack', { env: { account: '111111111111', region: 'eu-west-1', } }); app.synth(); Python In my-pipeline/my-pipeline-stack.py (may vary if your project folder isn't named my-pipeline): import aws_cdk as cdk from constructs import Construct from aws_cdk.pipelines import CodePipeline, CodePipelineSource, ShellStep class MyPipelineStack(cdk.Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) pipeline = CodePipeline(self, "Pipeline", pipeline_name="MyPipeline", synth=ShellStep("Synth", input=CodePipelineSource.git_hub("OWNER/REPO", "main"), commands=["npm install -g aws-cdk", "python -m pip install -r requirements.txt", "cdk synth"] ) ) In app.py: #!/usr/bin/env python3 import aws_cdk as cdk from my_pipeline.my_pipeline_stack import MyPipelineStack app = cdk.App() MyPipelineStack(app, "MyPipelineStack", env=cdk.Environment(account="111111111111", region="eu-west-1") ) app.synth() Java In src/main/java/com/myorg/MyPipelineStack.java (may vary if your project folder isn't named my-pipeline): package com.myorg; import java.util.Arrays; import software.constructs.Construct; import software.amazon.awscdk.Stack; import software.amazon.awscdk.StackProps; import software.amazon.awscdk.pipelines.CodePipeline; import software.amazon.awscdk.pipelines.CodePipelineSource; import software.amazon.awscdk.pipelines.ShellStep; public class MyPipelineStack extends Stack { public MyPipelineStack(final Construct scope, final String id) { this(scope, id, null); } public MyPipelineStack(final Construct scope, final String id, final StackProps props) { super(scope, id, props); CodePipeline pipeline = CodePipeline.Builder.create(this, 
"pipeline") .pipelineName("MyPipeline") .synth(ShellStep.Builder.create("Synth") .input(CodePipelineSource.gitHub("OWNER/REPO", "main")) .commands(Arrays.asList("npm install -g aws-cdk", "cdk synth")) .build()) .build(); } } In src/main/java/com/myorg/MyPipelineApp.java (may vary if your project folder isn't named my-pipeline): package com.myorg; import software.amazon.awscdk.App; import software.amazon.awscdk.Environment; import software.amazon.awscdk.StackProps; public class MyPipelineApp { public static void main(final String[] args) { App app = new App(); new MyPipelineStack(app, "PipelineStack", StackProps.builder() .env(Environment.builder() .account("111111111111") .region("eu-west-1") .build()) .build()); app.synth(); } } C# In src/MyPipeline/MyPipelineStack.cs (may vary if your project folder isn't named my-pipeline): using Amazon.CDK; using Amazon.CDK.Pipelines; namespace MyPipeline { public class MyPipelineStack : Stack { internal MyPipelineStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props) { var pipeline = new CodePipeline(this, "pipeline", new CodePipelineProps { PipelineName = "MyPipeline", Synth = new ShellStep("Synth", new ShellStepProps { Input = CodePipelineSource.GitHub("OWNER/REPO", "main"), Commands = new string[] { "npm install -g aws-cdk", "cdk synth" } }) }); } } } In src/MyPipeline/Program.cs (may vary if your project folder isn't named my-pipeline): using Amazon.CDK; namespace MyPipeline { sealed class Program { public static void Main(string[] args) { var app = new App(); new MyPipelineStack(app, "MyPipelineStack", new StackProps { Env = new Amazon.CDK.Environment { Account = "111111111111", Region = "eu-west-1" } }); app.Synth(); } } } Go package main import ( "github.com/aws/aws-cdk-go/awscdk/v2" codebuild "github.com/aws/aws-cdk-go/awscdk/v2/awscodebuild" ssm "github.com/aws/aws-cdk-go/awscdk/v2/awsssm" pipeline "github.com/aws/aws-cdk-go/awscdk/v2/pipelines" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" "os" ) // my CDK Stack with resources func NewCdkStack(scope constructs.Construct, id *string, props *awscdk.StackProps) awscdk.Stack { stack := awscdk.NewStack(scope, id, props) // create an example ssm parameter _ = ssm.NewStringParameter(stack, jsii.String("ssm-test-param"), &ssm.StringParameterProps{ ParameterName: jsii.String("/testparam"), Description: jsii.String("ssm parameter for demo"), StringValue: jsii.String("my test param"), }) return stack } // my CDK Application func NewCdkApplication(scope constructs.Construct, id *string, props *awscdk.StageProps) awscdk.Stage { stage := awscdk.NewStage(scope, id, props) _ = NewCdkStack(stage, jsii.String("cdk-stack"), &awscdk.StackProps{Env: props.Env}) return stage } // my CDK Pipeline func NewCdkPipeline(scope constructs.Construct, id *string, props *awscdk.StackProps) awscdk.Stack { stack := awscdk.NewStack(scope, id, props) // GitHub repo with owner and repository name githubRepo := pipeline.CodePipelineSource_GitHub(jsii.String("owner/repo"), jsii.String("main"), &pipeline.GitHubSourceOptions{ Authentication: awscdk.SecretValue_SecretsManager(jsii.String("my-github-token"), nil), }) // self mutating pipeline myPipeline := pipeline.NewCodePipeline(stack, jsii.String("cdkPipeline"), &pipeline.CodePipelineProps{ PipelineName: jsii.String("CdkPipeline"), // self mutation true - pipeline changes itself before application deployment SelfMutation: jsii.Bool(true), CodeBuildDefaults: &pipeline.CodeBuildOptions{ BuildEnvironment: 
&codebuild.BuildEnvironment{ // image version 6.0 recommended for newer go version BuildImage: codebuild.LinuxBuildImage_FromCodeBuildImageId(jsii.String("aws/codebuild/standard:6.0")), }, }, Synth: pipeline.NewCodeBuildStep(jsii.String("Synth"), &pipeline.CodeBuildStepProps{ Input: githubRepo, Commands: &[]*string{ jsii.String("npm install -g aws-cdk"), jsii.String("cdk synth"), }, }), }) // deployment of actual CDK application myPipeline.AddStage(NewCdkApplication(stack, jsii.String("MyApplication"), &awscdk.StageProps{ Env: targetAccountEnv(), }), &pipeline.AddStageOpts{ Post: &[]pipeline.Step{ pipeline.NewCodeBuildStep(jsii.String("Manual Steps"), &pipeline.CodeBuildStepProps{ Commands: &[]*string{ jsii.String("echo \"My CDK App deployed, manual steps go here ... \""), }, }), }, }) return stack } // main app func main() { defer jsii.Close() app := awscdk.NewApp(nil) // call CDK Pipeline NewCdkPipeline(app, jsii.String("CdkPipelineStack"), &awscdk.StackProps{ Env: pipelineEnv(), }) app.Synth(nil) } // env determines the AWS environment (account+region) in which our stack is to // be deployed. For more information see: https://docs.aws.amazon.com/cdk/latest/guide/environments.html func pipelineEnv() *awscdk.Environment { return &awscdk.Environment{ Account: jsii.String(os.Getenv("CDK_DEFAULT_ACCOUNT")), Region: jsii.String(os.Getenv("CDK_DEFAULT_REGION")), } } func targetAccountEnv() *awscdk.Environment { return &awscdk.Environment{ Account: jsii.String(os.Getenv("CDK_DEFAULT_ACCOUNT")), Region: jsii.String(os.Getenv("CDK_DEFAULT_REGION")), } } You must deploy a pipeline manually once. After that, the pipeline keeps itself up to date from the source code repository. So be sure that the code in the repo is the code you want deployed. Check in your changes and push to GitHub, then deploy: git add --all git commit -m "initial commit" git push cdk deploy Application stages To define a multi-stack AWS application that can be added to the pipeline all at once, define a subclass of Stage. (This is different from CdkStage in the CDK Pipelines module.) The stage contains the stacks that make up your application. If there are dependencies between the stacks, the stacks are automatically added to the pipeline in the right order. Stacks that don't depend on each other are deployed in parallel. You can add a dependency relationship between stacks by calling stack1.addDependency(stack2). Stages accept a default env argument, which becomes the default environment for the stacks inside it. (Stacks can still have their own environment specified.). An application is added to the pipeline by calling addStage() with instances of Stage. A stage can be instantiated and added to the pipeline multiple times to define different stages of your DTAP or multi-Region application pipeline. We will create a stack containing a simple Lambda function and place that stack in a stage. Then we will add the stage to the pipeline so it can be deployed. TypeScript Create the new file lib/my-pipeline-lambda-stack.ts to hold our application stack containing a Lambda function. 
import * as cdk from 'aws-cdk-lib'; import { Construct } from 'constructs'; import { Function, InlineCode, Runtime } from 'aws-cdk-lib/aws-lambda'; export class MyLambdaStack extends cdk.Stack { constructor(scope: Construct, id: string, props?: cdk.StackProps) { super(scope, id, props); new Function(this, 'LambdaFunction', { runtime: Runtime.NODEJS_18_X, handler: 'index.handler', code: new InlineCode('exports.handler = _ => "Hello, CDK";') }); } } Create the new file lib/my-pipeline-app-stage.ts to hold our stage. import * as cdk from 'aws-cdk-lib'; import { Construct } from "constructs"; import { MyLambdaStack } from './my-pipeline-lambda-stack'; export class MyPipelineAppStage extends cdk.Stage { constructor(scope: Construct, id: string, props?: cdk.StageProps) { super(scope, id, props); const lambdaStack = new MyLambdaStack(this, 'LambdaStack'); } } Edit lib/my-pipeline-stack.ts to add the stage to our pipeline. import * as cdk from 'aws-cdk-lib'; import { Construct } from 'constructs'; import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines'; import { MyPipelineAppStage } from './my-pipeline-app-stage'; export class MyPipelineStack extends cdk.Stack { constructor(scope: Construct, id: string, props?: cdk.StackProps) { super(scope, id, props); const pipeline = new CodePipeline(this, 'Pipeline', { pipelineName: 'MyPipeline', synth: new ShellStep('Synth', { input: CodePipelineSource.gitHub('OWNER/REPO', 'main'), commands: ['npm ci', 'npm run build', 'npx cdk synth'] }) }); pipeline.addStage(new MyPipelineAppStage(this, "test", { env: { account: "111111111111", region: "eu-west-1" } })); } } JavaScript Create the new file lib/my-pipeline-lambda-stack.js to hold our application stack containing a Lambda function. const cdk = require('aws-cdk-lib'); const { Function, InlineCode, Runtime } = require('aws-cdk-lib/aws-lambda'); class MyLambdaStack extends cdk.Stack { constructor(scope, id, props) { super(scope, id, props); new Function(this, 'LambdaFunction', { runtime: Runtime.NODEJS_18_X, handler: 'index.handler', code: new InlineCode('exports.handler = _ => "Hello, CDK";') }); } } module.exports = { MyLambdaStack } Create the new file lib/my-pipeline-app-stage.js to hold our stage. const cdk = require('aws-cdk-lib'); const { MyLambdaStack } = require('./my-pipeline-lambda-stack'); class MyPipelineAppStage extends cdk.Stage { constructor(scope, id, props) { super(scope, id, props); const lambdaStack = new MyLambdaStack(this, 'LambdaStack'); } } module.exports = { MyPipelineAppStage }; Edit lib/my-pipeline-stack.ts to add the stage to our pipeline. const cdk = require('aws-cdk-lib'); const { CodePipeline, CodePipelineSource, ShellStep } = require('aws-cdk-lib/pipelines'); const { MyPipelineAppStage } = require('./my-pipeline-app-stage'); class MyPipelineStack extends cdk.Stack { constructor(scope, id, props) { super(scope, id, props); const pipeline = new CodePipeline(this, 'Pipeline', { pipelineName: 'MyPipeline', synth: new ShellStep('Synth', { input: CodePipelineSource.gitHub('OWNER/REPO', 'main'), commands: ['npm ci', 'npm run build', 'npx cdk synth'] }) }); pipeline.addStage(new MyPipelineAppStage(this, "test", { env: { account: "111111111111", region: "eu-west-1" } })); } } module.exports = { MyPipelineStack } Python Create the new file my_pipeline/my_pipeline_lambda_stack.py to hold our application stack containing a Lambda function. 
import aws_cdk as cdk from constructs import Construct from aws_cdk.aws_lambda import Function, InlineCode, Runtime class MyLambdaStack(cdk.Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) Function(self, "LambdaFunction", runtime=Runtime.NODEJS_18_X, handler="index.handler", code=InlineCode("exports.handler = _ => 'Hello, CDK';") ) Create the new file my_pipeline/my_pipeline_app_stage.py to hold our stage. import aws_cdk as cdk from constructs import Construct from my_pipeline.my_pipeline_lambda_stack import MyLambdaStack class MyPipelineAppStage(cdk.Stage): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) lambdaStack = MyLambdaStack(self, "LambdaStack") Edit my_pipeline/my-pipeline-stack.py to add the stage to our pipeline. import aws_cdk as cdk from constructs import Construct from aws_cdk.pipelines import CodePipeline, CodePipelineSource, ShellStep from my_pipeline.my_pipeline_app_stage import MyPipelineAppStage class MyPipelineStack(cdk.Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) pipeline = CodePipeline(self, "Pipeline", pipeline_name="MyPipeline", synth=ShellStep("Synth", input=CodePipelineSource.git_hub("OWNER/REPO", "main"), commands=["npm install -g aws-cdk", "python -m pip install -r requirements.txt", "cdk synth"])) pipeline.add_stage(MyPipelineAppStage(self, "test", env=cdk.Environment(account="111111111111", region="eu-west-1"))) Java Create the new file src/main/java/com.myorg/MyPipelineLambdaStack.java to hold our application stack containing a Lambda function. package com.myorg; import software.constructs.Construct; import software.amazon.awscdk.Stack; import software.amazon.awscdk.StackProps; import software.amazon.awscdk.services.lambda.Function; import software.amazon.awscdk.services.lambda.Runtime; import software.amazon.awscdk.services.lambda.InlineCode; public class MyPipelineLambdaStack extends Stack { public MyPipelineLambdaStack(final Construct scope, final String id) { this(scope, id, null); } public MyPipelineLambdaStack(final Construct scope, final String id, final StackProps props) { super(scope, id, props); Function.Builder.create(this, "LambdaFunction") .runtime(Runtime.NODEJS_18_X) .handler("index.handler") .code(new InlineCode("exports.handler = _ => 'Hello, CDK';")) .build(); } } Create the new file src/main/java/com.myorg/MyPipelineAppStage.java to hold our stage. package com.myorg; import software.constructs.Construct; import software.amazon.awscdk.Stack; import software.amazon.awscdk.Stage; import software.amazon.awscdk.StageProps; public class MyPipelineAppStage extends Stage { public MyPipelineAppStage(final Construct scope, final String id) { this(scope, id, null); } public MyPipelineAppStage(final Construct scope, final String id, final StageProps props) { super(scope, id, props); Stack lambdaStack = new MyPipelineLambdaStack(this, "LambdaStack"); } } Edit src/main/java/com.myorg/MyPipelineStack.java to add the stage to our pipeline. 
package com.myorg; import java.util.Arrays; import software.constructs.Construct; import software.amazon.awscdk.Environment; import software.amazon.awscdk.Stack; import software.amazon.awscdk.StackProps; import software.amazon.awscdk.StageProps; import software.amazon.awscdk.pipelines.CodePipeline; import software.amazon.awscdk.pipelines.CodePipelineSource; import software.amazon.awscdk.pipelines.ShellStep; public class MyPipelineStack extends Stack { public MyPipelineStack(final Construct scope, final String id) { this(scope, id, null); } public MyPipelineStack(final Construct scope, final String id, final StackProps props) { super(scope, id, props); final CodePipeline pipeline = CodePipeline.Builder.create(this, "pipeline") .pipelineName("MyPipeline") .synth(ShellStep.Builder.create("Synth") .input(CodePipelineSource.gitHub("OWNER/REPO", "main")) .commands(Arrays.asList("npm install -g aws-cdk", "cdk synth")) .build()) .build(); pipeline.addStage(new MyPipelineAppStage(this, "test", StageProps.builder() .env(Environment.builder() .account("111111111111") .region("eu-west-1") .build()) .build())); } } C# Create the new file src/MyPipeline/MyPipelineLambdaStack.cs to hold our application stack containing a Lambda function. using Amazon.CDK; using Constructs; using Amazon.CDK.AWS.Lambda; namespace MyPipeline { class MyPipelineLambdaStack : Stack { public MyPipelineLambdaStack(Construct scope, string id, StackProps props=null) : base(scope, id, props) { new Function(this, "LambdaFunction", new FunctionProps { Runtime = Runtime.NODEJS_18_X, Handler = "index.handler", Code = new InlineCode("exports.handler = _ => 'Hello, CDK';") }); } } } Create the new file src/MyPipeline/MyPipelineAppStage.cs to hold our stage. using Amazon.CDK; using Constructs; namespace MyPipeline { class MyPipelineAppStage : Stage { public MyPipelineAppStage(Construct scope, string id, StageProps props=null) : base(scope, id, props) { Stack lambdaStack = new MyPipelineLambdaStack(this, "LambdaStack"); } } } Edit src/MyPipeline/MyPipelineStack.cs to add the stage to our pipeline. using Amazon.CDK; using Constructs; using Amazon.CDK.Pipelines; namespace MyPipeline { public class MyPipelineStack : Stack { internal MyPipelineStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props) { var pipeline = new CodePipeline(this, "pipeline", new CodePipelineProps { PipelineName = "MyPipeline", Synth = new ShellStep("Synth", new ShellStepProps { Input = CodePipelineSource.GitHub("OWNER/REPO", "main"), Commands = new string[] { "npm install -g aws-cdk", "cdk synth" } }) }); pipeline.AddStage(new MyPipelineAppStage(this, "test", new StageProps { Env = new Environment { Account = "111111111111", Region = "eu-west-1" } })); } } } Every application stage added by addStage() results in the addition of a corresponding pipeline stage, represented by a StageDeployment instance returned by the addStage() call. You can add pre-deployment or post-deployment actions to the stage by calling its addPre() or addPost() method. 
TypeScript // import { ManualApprovalStep } from 'aws-cdk-lib/pipelines'; const testingStage = pipeline.addStage(new MyPipelineAppStage(this, 'testing', { env: { account: '111111111111', region: 'eu-west-1' } })); testingStage.addPost(new ManualApprovalStep('approval')); JavaScript // const { ManualApprovalStep } = require('aws-cdk-lib/pipelines'); const testingStage = pipeline.addStage(new MyPipelineAppStage(this, 'testing', { env: { account: '111111111111', region: 'eu-west-1' } })); testingStage.addPost(new ManualApprovalStep('approval')); Python # from aws_cdk.pipelines import ManualApprovalStep testing_stage = pipeline.add_stage(MyPipelineAppStage(self, "testing", env=cdk.Environment(account="111111111111", region="eu-west-1"))) testing_stage.add_post(ManualApprovalStep('approval')) Java // import software.amazon.awscdk.pipelines.StageDeployment; // import software.amazon.awscdk.pipelines.ManualApprovalStep; StageDeployment testingStage = pipeline.addStage(new MyPipelineAppStage(this, "test", StageProps.builder() .env(Environment.builder() .account("111111111111") .region("eu-west-1") .build()) .build())); testingStage.addPost(new ManualApprovalStep("approval")); C# var testingStage = pipeline.AddStage(new MyPipelineAppStage(this, "test", new StageProps { Env = new Environment { Account = "111111111111", Region = "eu-west-1" } })); testingStage.AddPost(new ManualApprovalStep("approval")); You can add stages to a Wave to deploy them in parallel, for example when deploying a stage to multiple accounts or Regions. TypeScript const wave = pipeline.addWave('wave'); wave.addStage(new MyApplicationStage(this, 'MyAppEU', { env: { account: '111111111111', region: 'eu-west-1' } })); wave.addStage(new MyApplicationStage(this, 'MyAppUS', { env: { account: '111111111111', region: 'us-west-1' } })); JavaScript const wave = pipeline.addWave('wave'); wave.addStage(new MyApplicationStage(this, 'MyAppEU', { env: { account: '111111111111', region: 'eu-west-1' } })); wave.addStage(new MyApplicationStage(this, 'MyAppUS', { env: { account: '111111111111', region: 'us-west-1' } })); Python wave = pipeline.add_wave("wave") wave.add_stage(MyApplicationStage(self, "MyAppEU", env=cdk.Environment(account="111111111111", region="eu-west-1"))) wave.add_stage(MyApplicationStage(self, "MyAppUS", env=cdk.Environment(account="111111111111", region="us-west-1"))) Java // import software.amazon.awscdk.pipelines.Wave; final Wave wave = pipeline.addWave("wave"); wave.addStage(new MyPipelineAppStage(this, "MyAppEU", StageProps.builder() .env(Environment.builder() .account("111111111111") .region("eu-west-1") .build()) .build())); wave.addStage(new MyPipelineAppStage(this, "MyAppUS", StageProps.builder() .env(Environment.builder() .account("111111111111") .region("us-west-1") .build()) .build())); C# var wave = pipeline.AddWave("wave"); wave.AddStage(new MyPipelineAppStage(this, "MyAppEU", new StageProps { Env = new Environment { Account = "111111111111", Region = "eu-west-1" } })); wave.AddStage(new MyPipelineAppStage(this, "MyAppUS", new StageProps { Env = new Environment { Account = "111111111111", Region = "us-west-1" } })); Testing deployments You can add steps to a CDK Pipeline to validate the deployments that you're performing. 
For example, you can use the CDK Pipeline library's ShellStep to perform tasks such as the following: In its simplest form, adding validation actions looks like this: TypeScript // stage was returned by pipeline.addStage stage.addPost(new ShellStep("validate", { commands: ['../tests/validate.sh'], })); JavaScript // stage was returned by pipeline.addStage stage.addPost(new ShellStep("validate", { commands: ['../tests/validate.sh'], })); Python # stage was returned by pipeline.add_stage stage.add_post(ShellStep("validate", commands=[''../tests/validate.sh''] )) Java // stage was returned by pipeline.addStage stage.addPost(ShellStep.Builder.create("validate") .commands(Arrays.asList("'../tests/validate.sh'")) .build()); C# // stage was returned by pipeline.addStage stage.AddPost(new ShellStep("validate", new ShellStepProps { Commands = new string[] { "'../tests/validate.sh'" } })); Many AWS CloudFormation deployments result in the generation of resources with unpredictable names. Because of this, CDK Pipelines provide a way to read AWS CloudFormation outputs after a deployment. This makes it possible to pass (for example) the generated URL of a load balancer to a test action. To use outputs, expose the CfnOutput object you're interested in. Then, pass it in a step's envFromCfnOutputs property to make it available as an environment variable within that step. TypeScript // given a stack lbStack that exposes a load balancer construct as loadBalancer this.loadBalancerAddress = new cdk.CfnOutput(lbStack, 'LbAddress', { value: `https://${lbStack.loadBalancer.loadBalancerDnsName}/` }); // pass the load balancer address to a shell step stage.addPost(new ShellStep("lbaddr", { envFromCfnOutputs: {lb_addr: lbStack.loadBalancerAddress}, commands: ['echo $lb_addr'] })); JavaScript // given a stack lbStack that exposes a load balancer construct as loadBalancer this.loadBalancerAddress = new cdk.CfnOutput(lbStack, 'LbAddress', { value: `https://${lbStack.loadBalancer.loadBalancerDnsName}/` }); // pass the load balancer address to a shell step stage.addPost(new ShellStep("lbaddr", { envFromCfnOutputs: {lb_addr: lbStack.loadBalancerAddress}, commands: ['echo $lb_addr'] })); Python # given a stack lb_stack that exposes a load balancer construct as load_balancer self.load_balancer_address = cdk.CfnOutput(lb_stack, "LbAddress", value=f"https://{lb_stack.load_balancer.load_balancer_dns_name}/") # pass the load balancer address to a shell step stage.add_post(ShellStep("lbaddr", env_from_cfn_outputs={"lb_addr": lb_stack.load_balancer_address} commands=["echo $lb_addr"])) Java // given a stack lbStack that exposes a load balancer construct as loadBalancer loadBalancerAddress = CfnOutput.Builder.create(lbStack, "LbAddress") .value(String.format("https://%s/", lbStack.loadBalancer.loadBalancerDnsName)) .build(); stage.addPost(ShellStep.Builder.create("lbaddr") .envFromCfnOutputs( // Map.of requires Java 9 or later java.util.Map.of("lbAddr", loadBalancerAddress)) .commands(Arrays.asList("echo $lbAddr")) .build()); C# // given a stack lbStack that exposes a load balancer construct as loadBalancer loadBalancerAddress = new CfnOutput(lbStack, "LbAddress", new CfnOutputProps { Value = string.Format("https://{0}/", lbStack.loadBalancer.LoadBalancerDnsName) }); stage.AddPost(new ShellStep("lbaddr", new ShellStepProps { EnvFromCfnOutputs = new Dictionary<string, CfnOutput> { { "lbAddr", loadBalancerAddress } }, Commands = new string[] { "echo $lbAddr" } })); You can write simple validation tests right in the ShellStep, but 
this approach becomes unwieldy when the test is more than a few lines. For more complex tests, you can bring additional files (such as complete shell scripts, or programs in other languages) into the ShellStep via the inputs property. The inputs can be any step that has an output, including a source (such as a GitHub repo) or another ShellStep. Bringing in files from the source repository is appropriate if the files are directly usable in the test (for example, if they are themselves executable). In this example, we declare our GitHub repo as source (rather than instantiating it inline as part of the CodePipeline). Then, we pass this fileset to both the pipeline and the validation test. TypeScript const source = CodePipelineSource.gitHub('OWNER/REPO', 'main'); const pipeline = new CodePipeline(this, 'Pipeline', { pipelineName: 'MyPipeline', synth: new ShellStep('Synth', { input: source, commands: ['npm ci', 'npm run build', 'npx cdk synth'] }) }); const stage = pipeline.addStage(new MyPipelineAppStage(this, 'test', { env: { account: '111111111111', region: 'eu-west-1' } })); stage.addPost(new ShellStep('validate', { input: source, commands: ['sh ../tests/validate.sh'] })); JavaScript const source = CodePipelineSource.gitHub('OWNER/REPO', 'main'); const pipeline = new CodePipeline(this, 'Pipeline', { pipelineName: 'MyPipeline', synth: new ShellStep('Synth', { input: source, commands: ['npm ci', 'npm run build', 'npx cdk synth'] }) }); const stage = pipeline.addStage(new MyPipelineAppStage(this, 'test', { env: { account: '111111111111', region: 'eu-west-1' } })); stage.addPost(new ShellStep('validate', { input: source, commands: ['sh ../tests/validate.sh'] })); Python source = CodePipelineSource.git_hub("OWNER/REPO", "main") pipeline = CodePipeline(self, "Pipeline", pipeline_name="MyPipeline", synth=ShellStep("Synth", input=source, commands=["npm install -g aws-cdk", "python -m pip install -r requirements.txt", "cdk synth"])) stage = pipeline.add_stage(MyApplicationStage(self, "test", env=cdk.Environment(account="111111111111", region="eu-west-1"))) stage.add_post(ShellStep("validate", input=source, commands=["sh ../tests/validate.sh"], )) Java final CodePipelineSource source = CodePipelineSource.gitHub("OWNER/REPO", "main"); final CodePipeline pipeline = CodePipeline.Builder.create(this, "pipeline") .pipelineName("MyPipeline") .synth(ShellStep.Builder.create("Synth") .input(source) .commands(Arrays.asList("npm install -g aws-cdk", "cdk synth")) .build()) .build(); final StageDeployment stage = pipeline.addStage(new MyPipelineAppStage(this, "test", StageProps.builder() .env(Environment.builder() .account("111111111111") .region("eu-west-1") .build()) .build())); stage.addPost(ShellStep.Builder.create("validate") .input(source) .commands(Arrays.asList("sh ../tests/validate.sh")) .build()); C# var source = CodePipelineSource.GitHub("OWNER/REPO", "main"); var pipeline = new CodePipeline(this, "pipeline", new CodePipelineProps { PipelineName = "MyPipeline", Synth = new ShellStep("Synth", new ShellStepProps { Input = source, Commands = new string[] { "npm install -g aws-cdk", "cdk synth" } }) }); var stage = pipeline.AddStage(new MyPipelineAppStage(this, "test", new StageProps { Env = new Environment { Account = "111111111111", Region = "eu-west-1" } })); stage.AddPost(new ShellStep("validate", new ShellStepProps { Input = source, Commands = new string[] { "sh ../tests/validate.sh" } })); Getting the additional files from the synth step is appropriate if your tests need to be compiled, which is 
done as part of synthesis. TypeScript const synthStep = new ShellStep('Synth', { input: CodePipelineSource.gitHub('OWNER/REPO', 'main'), commands: ['npm ci', 'npm run build', 'npx cdk synth'], }); const pipeline = new CodePipeline(this, 'Pipeline', { pipelineName: 'MyPipeline', synth: synthStep }); const stage = pipeline.addStage(new MyPipelineAppStage(this, 'test', { env: { account: '111111111111', region: 'eu-west-1' } })); // run a script that was transpiled from TypeScript during synthesis stage.addPost(new ShellStep('validate', { input: synthStep, commands: ['node tests/validate.js'] })); JavaScript const synthStep = new ShellStep('Synth', { input: CodePipelineSource.gitHub('OWNER/REPO', 'main'), commands: ['npm ci', 'npm run build', 'npx cdk synth'], }); const pipeline = new CodePipeline(this, 'Pipeline', { pipelineName: 'MyPipeline', synth: synthStep }); const stage = pipeline.addStage(new MyPipelineAppStage(this, "test", { env: { account: "111111111111", region: "eu-west-1" } })); // run a script that was transpiled from TypeScript during synthesis stage.addPost(new ShellStep('validate', { input: synthStep, commands: ['node tests/validate.js'] })); Python synth_step = ShellStep("Synth", input=CodePipelineSource.git_hub("OWNER/REPO", "main"), commands=["npm install -g aws-cdk", "python -m pip install -r requirements.txt", "cdk synth"]) pipeline = CodePipeline(self, "Pipeline", pipeline_name="MyPipeline", synth=synth_step) stage = pipeline.add_stage(MyApplicationStage(self, "test", env=cdk.Environment(account="111111111111", region="eu-west-1"))) # run a script that was compiled during synthesis stage.add_post(ShellStep("validate", input=synth_step, commands=["node test/validate.js"], )) Java final ShellStep synth = ShellStep.Builder.create("Synth") .input(CodePipelineSource.gitHub("OWNER/REPO", "main")) .commands(Arrays.asList("npm install -g aws-cdk", "cdk synth")) .build(); final CodePipeline pipeline = CodePipeline.Builder.create(this, "pipeline") .pipelineName("MyPipeline") .synth(synth) .build(); final StageDeployment stage = pipeline.addStage(new MyPipelineAppStage(this, "test", StageProps.builder() .env(Environment.builder() .account("111111111111") .region("eu-west-1") .build()) .build())); stage.addPost(ShellStep.Builder.create("validate") .input(synth) .commands(Arrays.asList("node ./tests/validate.js")) .build()); C# var synth = new ShellStep("Synth", new ShellStepProps { Input = CodePipelineSource.GitHub("OWNER/REPO", "main"), Commands = new string[] { "npm install -g aws-cdk", "cdk synth" } }); var pipeline = new CodePipeline(this, "pipeline", new CodePipelineProps { PipelineName = "MyPipeline", Synth = synth }); var stage = pipeline.AddStage(new MyPipelineAppStage(this, "test", new StageProps { Env = new Environment { Account = "111111111111", Region = "eu-west-1" } })); stage.AddPost(new ShellStep("validate", new ShellStepProps { Input = synth, Commands = new string[] { "node ./tests/validate.js" } })); Security notes Any form of continuous delivery has inherent security risks. Under the AWS Shared Responsibility Model, you are responsible for the security of your information in the AWS Cloud. The CDK Pipelines library gives you a head start by incorporating secure defaults and modeling best practices. However, by its very nature, a library that needs a high level of access to fulfill its intended purpose cannot assure complete security. There are many attack vectors outside of AWS and your organization. 
In particular, keep in mind the following: Troubleshooting The following issues are commonly encountered while getting started with CDK Pipelines.
8585
dbpedia
2
92
https://melpa.org/
en
MELPA
[]
[]
[]
[ "" ]
null
[]
null
The largest and most up-to-date repository of Emacs packages.
en
favicon.ico
null
8585
dbpedia
3
5
https://en.wikipedia.org/wiki/Listaller
en
Listaller
https://upload.wikimedia…staller-Logo.png
https://upload.wikimedia…staller-Logo.png
[ "https://en.wikipedia.org/static/images/icons/wikipedia.png", "https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-wordmark-en.svg", "https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-tagline-en.svg", "https://upload.wikimedia.org/wikipedia/en/thumb/9/99/Question_book-new.svg/50px-Question_book-new.svg.png", "https://upload.wikimedia.org/wikipedia/commons/b/b1/Listaller-Logo.png", "https://upload.wikimedia.org/wikipedia/en/thumb/8/8a/OOjs_UI_icon_edit-ltr-progressive.svg/10px-OOjs_UI_icon_edit-ltr-progressive.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/8/8a/OOjs_UI_icon_edit-ltr-progressive.svg/10px-OOjs_UI_icon_edit-ltr-progressive.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Free_and_open-source_software_logo_%282009%29.svg/28px-Free_and_open-source_software_logo_%282009%29.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b0/NewTux.svg/13px-NewTux.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Free_and_open-source_software_logo_%282009%29.svg/16px-Free_and_open-source_software_logo_%282009%29.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/d/db/Symbol_list_class.svg/16px-Symbol_list_class.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/9/96/Symbol_category_class.svg/16px-Symbol_category_class.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/9/9c/Symbol_file_class.svg/16px-Symbol_file_class.svg.png", "https://login.wikimedia.org/wiki/Special:CentralAutoLogin/start?type=1x1", "https://en.wikipedia.org/static/images/footer/wikimedia-button.svg", "https://en.wikipedia.org/static/images/footer/poweredby_mediawiki.svg" ]
[]
[]
[ "" ]
null
[ "Contributors to Wikimedia projects" ]
2011-03-16T17:26:17+00:00
en
/static/apple-touch/wikipedia.png
https://en.wikipedia.org/wiki/Listaller
Linux package management system ListallerDeveloper(s)Matthias KlumppInitial releaseDecember 2007; 16 years ago ( )Stable release 0.5.9[1] / 8 September 2014; 9 years ago ( ) RepositoryWritten inVala, C/C++Operating systemLinuxTypePackage management systemLicenseGNU Lesser General Public License, GNU General Public LicenseWebsitelistaller .tenstral .net Listaller is a free computer software installation system (similar to a package management system) aimed at making it simple to create a package that can be installed on all Linux distributions as well as providing tools and API to make software management on Linux more user-friendly. History [edit] Listaller was started in December 2007 by freedesktop.org developer Matthias Klumpp as an experimental project to explore the possibility of writing a universal user interface to manage all kinds of Linux software, no matter how it was installed. Therefore, Listaller had backends to manage Autopackage, LOKI, Mojo and native distribution packages. The original project provided one user interface to manage all kinds of installed software. Interaction with the native distribution package management was done via an own abstraction layer, which was later replaced by PackageKit.[2] Listaller also provided a cross-distribution software installation format which should have made it easier to create packages which run on multiple distributions. The installer part of Listaller was also able to assist in installing Autopackage packages. The very first versions were written in Object Pascal. Although the project started as an experiment, it soon evolved to a competitor for Autopackage and Mojo. Until 2011, Listaller never made any stable release. With the announcement of AppStream a lot of the original Listaller goals would be achieved, so the author decided to change Listaller away from a full software manager to a software installer only and joined forces with the AppStream project. Because Pascal was not considered as ideal language to collaborate with other projects and the project already had spent much time in developing Pascal bindings to third-party libraries, Listaller was rewritten in Vala with a subset of the original features and the main goal to provide seamless integration with AppStream and PackageKit. Therefore the universal software manager part was removed and the project now focuses on creating a cross-distro format for distribution of binary Linux software. As of Nov 10, 2014 Matthias announced the Limba project.[3] Limba[4] is supposed to be Listaller's next version. Methodology [edit] Listaller is intended to be used for installing binary, or pre-compiled, versions of non-core applications such as word processors, web browsers, and personal computer games, rather than core libraries and applications such as operating system shells. Listaller is not intended to provide support for installing system libraries for security reasons. Listaller is using an own package format, so-called IPK packages (short for Installation package), which are LZMA-compressed signed tarballs. IPK packages contain only small configuration files to modify the setup process. They do not provide their own logic or scripts which are run during install time. All parts of a setup process are handled by Listaller's built-in routines, which make it possible for distributors to modify the setup process of 3rd-party applications to comply to their own policy, if necessary. 
The key value of Listaller is integration into desktop environments, existing package management tools and distributions. Therefore, the project provides several integration components by default, which make it possible to manage Listaller-installed 3rd-party applications from any software manager which supports PackageKit and/or AppStream. At the time of writing, running all installed 3rd-party tools in a sandbox by default was under discussion.[5] Programs that use Listaller must also be relocatable, meaning they must be installable to varying directories with a single binary. This makes it possible for Listaller to install software for non-root users into their home directory, although this method has to be enabled explicitly and its use is not encouraged. The Listaller Developer Tools provide tools and documentation for application developers to make their software relocatable. Listaller and Autopackage In August 2010 both projects announced they would merge.[6] As a consequence of the merge, Autopackage abandoned its own binary package format and all user interfaces for installing Autopackage packages. Autopackage tools such as BinReloc, for creating relocatable applications, and APBuild are now developed as part of the Listaller project. The reason for merging Autopackage into Listaller was mainly a lack of developers in both projects, so they decided to join forces. Integration KDE provides support for Listaller through Apper, although distributors need to explicitly enable it using a compile-time switch. Support for GNOME is currently being developed as part of the GNOME-PackageKit suite. In theory, any distribution which can run PackageKit >= 0.8.6 should be able to provide Listaller support too. Ubuntu announced that they would not use Listaller as the 3rd-party installer for their Ubuntu Phone, but would develop their own, Ubuntu-specific solution instead.[7] See also: AppStream
8585
dbpedia
1
69
https://almenscorner.io/tag/python/
en
almen's Intune corner
https://almenscorner.io/…om-457@0@f-1.jpg
https://almenscorner.io/…om-457@0@f-1.jpg
[ "https://almenscorner.io/content/images/size/w300/2023/08/almenscorner-1.png", "https://almenscorner.io/assets/images/tag-bg.svg?v=41ed630341", "https://almenscorner.io/content/images/size/w600/2024/03/_59028bba-9bae-49d3-ac9c-91d39df16767.jpeg", "https://almenscorner.io/content/images/size/w600/2022/09/mmglogo.png", "https://almenscorner.io/content/images/size/w600/2022/01/logicapp.png", "https://almenscorner.io/content/images/size/w600/2021/12/Screenshot-2021-12-14-at-16.00.40.png", "https://almenscorner.io/content/images/size/w600/2021/11/Screenshot-2021-11-17-at-21.26.19.png", "https://almenscorner.io/content/images/size/w600/2021/10/autopkg-1.png", "https://almenscorner.io/content/images/size/w600/2021/09/Screenshot-2021-09-24-at-15.18.44.png" ]
[]
[]
[ "" ]
null
[]
2024-04-02T00:00:00
en
https://almenscorner.io/…almenscorner.png
almen's Intune corner
https://almenscorner.io/tag/python/
8585
dbpedia
0
25
https://www.kali.org/docs/development/intro-to-packaging-example/
en
Introduction to packaging step-by-step example
https://www.kali.org/ima…es/kali-logo.svg
https://www.kali.org/ima…es/kali-logo.svg
[ "https://www.kali.org/docs/development/intro-to-packaging-example/instaloader-00.png", "https://www.kali.org/docs/development/intro-to-packaging-example/instaloader-01.png", "https://www.kali.org/docs/development/intro-to-packaging-example/instaloader-02.png" ]
[]
[]
[ "kali", "linux", "kalilinux", "Penetration", "Testing", "Penetration Testing", "Distribution", "Advanced" ]
null
[]
2023-10-31T00:00:00+00:00
Instaloader Instaloader is a Python 3 application with a single dependency (Python’s requests). This makes it a relatively simple package, however not as straightforward as only packaging up a shell script would be. Because of the learning opportunities and simplicity, this makes it a good introduction package. Instaloader Code Overview The first thing we do is look at the application’s GitHub page.
en
https://www.kali.org/images/favicon.png
Kali Linux
https://www.kali.org/docs/development/intro-to-packaging-example/
Instaloader Instaloader is a Python 3 application with a single dependency (Python’s requests). This makes it a relatively simple package, however not as straightforward as only packaging up a shell script would be. Because of the learning opportunities and simplicity, this makes it a good introduction package. Instaloader Code Overview The first thing we do is look at the application’s GitHub page. A few things stand out which we take a note of: What we notice here is some information that will come in handy later: The tool contains a setup.py script It has a release The license is MIT based We’ll be digging into each of these more later, for now it is just information to know. Setting Up The Environment We will assume that we have already followed our documentation on setting up a packing environment. Let’s set up our directories now for this package: kali@kali:~$ mkdir -p ~/kali/packages/instaloader/ ~/kali/upstream/ kali@kali:~$ Everything that relates to us building a package will be using ~/kali/. In there will be two sub folders: packages/ will be a source code of the package we are going to create upstream/ will be a compressed file of the source code of the application (ideally from a tag version release which we saw before) Downloading Tag Releases Because we are making a new package from scratch, we’ll manually download the version of the tool we want to package up. If we were updating a package (and it was packaged correctly), there is a process to help speed it up. However, this will be covered in another guide. Going to the GitHub’s release page, we can see the latest version (which at the time of writing is 4.4.4). Here is the option to download instaloader-v4.4.4-windows-standalone.zip, as well as Source Code (zip), and Source Code (tar.gz). We are interested in the tar.gz option. We will use wget and make sure to format its name appropriately according to Debian’s standards for source packages (take note of .orig.tar.gz): kali@kali:~$ wget https://github.com/instaloader/instaloader/archive/refs/tags/v4.4.4.tar.gz -O ~/kali/upstream/instaloader_4.4.4.orig.tar.gz kali@kali:~$ If there isn’t a tag release for the software (or it hasn’t had an release in some time), we can use the latest git commit. This is covered in another guide. However, it is preferred to use a tag release when available. Creating Package Source Code We need to switch paths to the working location of the package: kali@kali:~$ cd ~/kali/packages/instaloader/ kali@kali:~/kali/packages/instaloader$ We are now going to create a new blank git repository: kali@kali:~/kali/packages/instaloader$ git init Initialized empty Git repository in /home/kali/kali/packages/instaloader/.git/ kali@kali:~/kali/packages/instaloader$ If we wanted to, we can confirm this by looking at “status” and “log”: kali@kali:~/kali/packages/instaloader$ git status On branch master No commits yet nothing to commit (create/copy files and use "git add" to track) kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git log fatal: your current branch 'master' does not have any commits yet kali@kali:~/kali/packages/instaloader$ Great. Everything is empty; we have a clean working area. We can now import the upstream version into our packing source code by using the file downloaded from wget before. Because of the filename format, gbp is able to detect the values instaloader as the package name, and 4.4.4 as the version. 
We just press enter to accept the default values: kali@kali:~/kali/packages/instaloader$ gbp import-orig ~/kali/upstream/instaloader_4.4.4.orig.tar.gz What will be the source package name? [instaloader] What is the upstream version? [4.4.4] gbp:info: Importing '/home/kali/kali/upstream/instaloader_4.4.4.orig.tar.gz' to branch 'upstream'... gbp:info: Source package is instaloader gbp:info: Upstream version is 4.4.4 gbp:info: Successfully imported version 4.4.4 of /home/kali/kali/upstream/instaloader_4.4.4.orig.tar.gz kali@kali:~/kali/packages/instaloader$ If we wanted to check everything is okay, once again, we can use git to do so: kali@kali:~/kali/packages/instaloader$ git status On branch master nothing to commit, working tree clean kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git log commit 494f71875f10f8d1da69a8edf2fc75300e4485b9 (HEAD -> master, tag: upstream/4.4.4, upstream) Author: Joseph O'Gorman <[email protected]> New upstream version 4.4.4 kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git branch -v * master 494f718 New upstream version 4.4.4 pristine-tar 439fe30 pristine-tar data for instaloader_4.4.4.orig.tar.gz upstream 494f718 New upstream version 4.4.4 kali@kali:~/kali/packages/instaloader$ So there is now an automatic commit created in the master branch (which is the current active branch, shown by the *), as well as two other branches: pristine-tar which is metadata from the import upstream which is the source code of the application, without any of our package modifications We are creating a Kali package, and we don’t use the master branch, but rather kali/master. So let’s switch: kali@kali:~/kali/packages/instaloader$ git checkout -b kali/master Switched to a new branch 'kali/master' kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git branch -D master Deleted branch master (was 494f718). kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git branch -v * kali/master 494f718 New upstream version 4.4.4 pristine-tar 439fe30 pristine-tar data for instaloader_4.4.4.orig.tar.gz upstream 494f718 New upstream version 4.4.4 kali@kali:~/kali/packages/instaloader$ Now we can generate the necessary files required to build a Debain-based package and also remove any example files created. During the process, we will be asked if its: Single binary Arch-Independent Library Python We are going to keep it simple, and go with “Single”. Then accept what’s on the screen with Y. If you would like more information about when to use what option, please see the manpage for dh_make.: kali@kali:~/kali/packages/instaloader$ dh_make --file ~/kali/upstream/instaloader_4.4.4.orig.tar.gz -p instaloader_4.4.4 Type of package: (single, indep, library, python) [s/i/l/p]? Maintainer Name : Joseph O'Gorman Email-Address : [email protected] Date : Thu, 02 Jul 2020 17:59:47 -0400 Package Name : instaloader Version : 4.4.4 License : blank Package Type : single Are the details correct? [Y/n/q] Currently there is not top level Makefile. This may require additional tuning Done. Please edit the files in the debian/ subdirectory now. kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ rm debian/*.docs debian/README* debian/*.ex debian/*.EX kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ rm -r debian/upstream kali@kali:~/kali/packages/instaloader$ We use --file to say where the orig.tar.gz file is. 
If the file was one directory back (../), this would not be needed, however as we have created a separate location for the file it is. If you would like to see what got generated when using dh_make, we can use git: kali@kali:~/kali/packages/instaloader$ git status On branch kali/master Untracked files: (use "git add <file>..." to include in what will be committed) debian/ nothing added to commit but untracked files present (use "git add" to track) kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ ls -R debian/ debian/: changelog control copyright rules source debian/source: format kali@kali:~/kali/packages/instaloader$ A quick overview of each of those files: changelog - tracks when the package gets an update (including why and by who). This is responsible for the package version control - is the metadata for the package (often seen with apt) copyright - what is under what license. The package can be under something different to the work we have put in to create the package rules - how to install the package source/format - is the source package format At this point, we have the base packaging files in place, and it feels like a good idea to commit before starting some real work: kali@kali:~/kali/packages/instaloader$ git add debian/ kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git commit -m "Initial packaging files" [kali/master 52042da] Initial packaging files 5 files changed, 93 insertions(+) create mode 100644 debian/changelog create mode 100644 debian/control create mode 100644 debian/copyright create mode 100755 debian/rules create mode 100644 debian/source/format kali@kali:~/kali/packages/instaloader$ We now need to edit most of these to make sure the information is accurate. We can use what we found on GitHub to supply the correct info into the debian/ files: License Dependencies Maintainers Description Collecting Information License/Maintainers For this package, its straight forward. GitHub has given us a helping hand, and detected the license as MIT. We can also see there is a license file: kali@kali:~/kali/packages/instaloader$ cat LICENSE The MIT License (MIT) Copyright (c) 2016-2019 Alexander Graf and André Koch-Kramer. [...] kali@kali:~/kali/packages/instaloader$ Reading the license, we can see there are two authors which are given credit too: Alexander Graf and André Koch-Kramer. However, we don’t have a method of contact for them. We continue to explore the rest of the git repository, looking for something which may give us more authors so we can give credit to them. 
There isn’t a fixed structure in place, however there are some things to check and look out for: README* - authors may put contact information here A few examples could be: README, README.txt, README.MD README.MKDOCS, or Readme.txt AUTHOR* - They may have a dedicated file for author information CREDIT* - They may have a dedicated file to who they give credit to LICENSE* - Like mentioned above, the license file may give author information docs/ - They may place all their documentation in a separate folder The “main” starting point of the application may have comments at the top of the file - in this, instaloader.py Git commits - git --no-pager log -s --format="%ae" | sort -u For our package, we can see: kali@kali:~/kali/packages/instaloader$ ls AUTHORS.md debian deploy docs instaloader instaloader.py LICENSE Pipfile Pipfile.lock README.rst setup.py test kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ ls docs/ as-module.rst codesnippets contributing.rst installation.rst logo.svg requirements.txt _templates basic-usage.rst codesnippets.rst favicon.ico logo_heading.png Makefile sphinx_autodoc_typehints.py troubleshooting.rst cli-options.rst conf.py index.rst logo.png README.md _static kali@kali:~/kali/packages/instaloader$ As it turns out, there is: AUTHORS.md, docs/, instaloader.py, and README.rst, so we have a few places to look at. Starting with AUTHORS.md, we can see the authors name and their method of contact: kali@kali:~/kali/packages/instaloader$ cat AUTHORS.md Authors ======= Instaloader is written by - Alexander Graf (@aandergr) - André Koch-Kramer (@Thammus) - Lars Lindqvist (@e5150) kali@kali:~/kali/packages/instaloader$ So rather than an email address, it appears to be a username (could be just for GitHub, or a generic Internet handle). This is enough for us to go forward (even though its not ideal). Another trick we could try is looking to see if they used a “legit” email address with git: kali@kali:~/kali/packages/instaloader$ git clone https://github.com/instaloader/instaloader/ /tmp/instaloader kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cd /tmp/instaloader/ kali@kali:/tmp/instaloader$ git --no-pager log -s --format="%ae" | sort -u | grep -v '@users.noreply.github.com' [...] kali@kali:/tmp/instaloader$ kali@kali:/tmp/instaloader$ cd ~/kali/packages/instaloader/ kali@kali:~/kali/packages/instaloader$ It doesn’t appear so. Was worth a try! Dependencies/Maintainers We need to see what is required to be installed on the machine in order for the application to work. Either pre-installed or will be installed using the application. Some starting places to look at for this information: README* SETUP* INSTALL* docs/ There is a README for this application, but it just says how to install the application, rather than how to build it/compile from source: kali@kali:~/kali/packages/instaloader$ grep -C 3 -i install README.rst :: $ pip3 install instaloader $ instaloader profile [profile ...] kali@kali:~/kali/packages/instaloader$ Exploring the pip option is something we could do, but out of scope for this guide. Next we spot setup.py, which contains a lot of useful information: kali@kali:~/kali/packages/instaloader$ cat setup.py #!/usr/bin/env python3 [...] if sys.version_info < (3, 5): sys.exit('Instaloader requires Python >= 3.5.') requirements = ['requests>=2.4'] if platform.system() == 'Windows' and sys.version_info < (3, 6): requirements.append('win_unicode_console') [...] 
url='https://instaloader.github.io/', license='MIT', author='Alexander Graf, André Koch-Kramer', author_email='[email protected], [email protected]', description='Download pictures (or videos) along with their captions and other metadata ' 'from Instagram.', long_description=open(os.path.join(SRC, 'README.rst')).read(), install_requires=requirements, python_requires='>=3.5', [...] kali@kali:~/kali/packages/instaloader$ We managed to get the following information from this: From the shebang, we can see it’s Python 3 (#!/usr/bin/env python3). We can see it wants Python 3.5 or higher. We can see it wants requests, and for it to be v2.4 or higher. We can see that if it’s on Windows, it requires another dependency, but we are on Linux, so that’s not the case. We can see the program’s home URL. We can see the license (MIT). We can see the authors and their email addresses. We can get a description of the program. Handy! When packaging, we are building a standalone package, which needs to be able to install offline. Something else which needs to be kept in mind is other package management systems, such as Python’s pip or Ruby’s gems. Any dependencies from these also need to be in the main OS package management. In our case, we need Python’s requests. We have two ways of searching for it. We can use either: pkg.kali.org apt-cache But we also need to know what we are searching for. There is a naming convention, but if you are unsure, doing multiple searches may help: requests python-requests python3-requests We will stick with the command line option for the time being. Doing just requests gives a little too many results: kali@kali:~/kali/packages/instaloader$ apt-cache search requests | wc -l 561 kali@kali:~/kali/packages/instaloader$ So we need to do better to shorten the list, by searching just the short version of the description (we will cover this more later, but it’s the visible part of the output): kali@kali:~/kali/packages/instaloader$ apt-cache search --names-only requests | wc -l 19 kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ apt-cache search --names-only python-requests python-requests-cache-doc - persistent cache for requests library (doc) python-requests-doc - elegant and simple HTTP library for Python (Documentation) python-requests-mock-doc - mock out responses from the requests package - doc python-requests-oauthlib-doc - module providing OAuthlib auth support for requests (Common Documentation) python-requests-toolbelt-doc - Utility belt for python3-requests (documentation) kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ apt-cache search --names-only python-requests | grep -vi 'doc' kali@kali:~/kali/packages/instaloader$ After removing the documentation from the results, we don’t get any results.
So on with the next search!: kali@kali:~/kali/packages/instaloader$ apt-cache search --names-only python3-requests python3-requests - elegant and simple HTTP library for Python3, built for human beings python3-requests-cache - persistent cache for requests library (Python 3) python3-requests-file - File transport adapter for Requests - Python 3.X python3-requests-futures - library for asynchronous HTTP requests (Python 3) python3-requests-kerberos - Kerberos/GSSAPI authentication handler for python-requests - Python 3.x python3-requests-mock - mock out responses from the requests package - Python 3.x python3-requests-ntlm - Adds support for NTLM authentication to the requests library python3-requests-oauthlib - module providing OAuthlib auth support for requests (Python 3) python3-requests-toolbelt - Utility belt for advanced users of python3-requests python3-requests-unixsocket - Use requests to talk HTTP via a UNIX domain socket - Python 3.x python3-requestsexceptions - import exceptions from bundled packages in requests. - Python 3.x kali@kali:~/kali/packages/instaloader$ The first result, python3-requests, looks exactly right! We can look closer: kali@kali:~/kali/packages/instaloader$ apt-cache show python3-requests Package: python3-requests Source: requests Version: 2.23.0+dfsg-2 [...] kali@kali:~/kali/packages/instaloader$ And we can see its version is 2.23.0, which is higher than 2.4, so we don’t need to update the package. This will be covered in another guide when required. Maintainers While doing the other parts, we have discovered the authors and maintainers of the software, so we don’t need to do anything extra for this. Description There are two descriptions that we need to supply: a long description and a short description. When we look at the GitHub page we can see an about section that we can use for the short description. For the long description, we can use the description in the README. We also have a value from the setup.py. Editing Package Source Code Now that we have that information copied down, we can start to populate the files in the debian/ folder we created with dh_make. More information on the subject can be found in the Debian documentation. Changelog If we followed the documentation on setting up a packaging environment, the only values we will need to alter would be the distribution (from UNRELEASED to kali-dev), the version (from 4.4.4-1 to 4.4.4-0kali1) and the log entry: kali@kali:~/kali/packages/instaloader$ cat debian/changelog instaloader (4.4.4-1) UNRELEASED; urgency=medium * Initial release (Closes: #nnnn) <nnnn is the bug number of your ITP> -- Joseph O'Gorman <[email protected]> Thu, 02 Jul 2020 17:59:47 -0400 kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ vim debian/changelog kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/changelog instaloader (4.4.4-0kali1) kali-dev; urgency=medium * Initial release -- Joseph O'Gorman <[email protected]> Thu, 02 Jul 2020 17:59:47 -0400 kali@kali:~/kali/packages/instaloader$ Control This file is the metadata for the package, and contains a lot of information. More information on the subject can be found in the Debian documentation.
Out of the box, it will look a little like this: kali@kali:~/kali/packages/instaloader$ cat debian/control Source: instaloader Section: unknown Priority: optional Maintainer: Joseph O'Gorman <[email protected]> Rules-Requires-Root: no Build-Depends: debhelper-compat (= 13) Standards-Version: 4.6.1 Homepage: <insert the upstream URL, if relevant> #Vcs-Browser: https://salsa.debian.org/debian/instaloader #Vcs-Git: https://salsa.debian.org/debian/instaloader.git Package: instaloader Architecture: any Depends: ${misc:Depends}, ${shlibs:Depends}, Description: <insert up to 60 chars description> <Insert long description, indented with spaces.> kali@kali:~/kali/packages/instaloader$ So we can see a few things that need updating: Section - we set this to be misc, or if we know for sure it should be another section based off of the sections in Debian testing we can set it to that section Maintainer - we switch to be the Kali team, rather than an individual Uploaders - this is the individual(s) who are responsible for packaging up the application Build-Depends - what packages are required to BUILD the package Homepage - where is the tool located on the Internet Vcs-Browser - package source code to view online Vcs-Git - package source code location Architecture - what machines can this work on Depends - what other packages are required for this package to work Description - short and long description Most of this we have now figured out from before, so it should make it easier to fill in. We went ahead and created a remote empty git repository on our GitLab account. In our example, this is the end result: kali@kali:~/kali/packages/instaloader$ vim debian/control kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/control Source: instaloader Section: misc Priority: optional Maintainer: Kali Developers <[email protected]> Uploaders: Joseph O'Gorman <[email protected]> Rules-Requires-Root: no Build-Depends: debhelper-compat (= 13), dh-python, python3-all, python3-requests, python3-setuptools, Standards-Version: 4.6.1 Homepage: https://instaloader.github.io/ Vcs-Browser: https://gitlab.com/kalilinux/packages/instaloader Vcs-Git: https://gitlab.com/kalilinux/packages/instaloader.git Package: instaloader Architecture: all Depends: python3-requests, ${misc:Depends}, ${python3:Depends}, Description: Download media along with their metadata from Instagram Downloads public and private profiles, hashtags, user stories, feeds and saved media Downloads comments, geotags and captions of each post. Automatically detects profile name changes and renames the target directory accordingly Allows fine-grained customization of filters and where to store downloaded media kali@kali:~/kali/packages/instaloader$ NOTE: The Build-Depends & Depends are indented with one space (and end with commas). The Description is also indentend with one space. There is a lot going on here, so lets point out a few things Something to keep in mind with the formatting of the long descriptions, at about every 70 characters in (to the nearest whole word), we would put a new line, to help keep the formatting under control. Now onto the dependencies, of which we have: Build & Package. For the build-dependencies of Python 3 we will have to have four things: debhelper-compat dh-python python3-all python3-setuptools In a separate guide there will be an explanation as to why these are included, however only the first two are going to be a staple of Python 3 packaging as the latter two are for more specific cases. 
In our application, we have another one, python3-requests, which we got from setup.py and that is a requirement from the application. Typically, if there was not a setup.py file, we would not need to include python3-requests in our “Build-Depends”. However, due to the setup.py file, we will need to include python3-requests in both the “Build-Depends” as well as the package “Depends”. This ensures these packages are always on the system when we install our package (especially handy when using “sbuild”). The debhelper-compat level determines how the package will be built. The higher the compat level, the newer the version. Newer versions have certain menial tasks done automatically, so this should not be lowered. The package dependencies are relatively straightforward. We get rid of the ${shlibs:Depends} as we are packaging up a Python tool, and instead replace it with the python3 depends version ${python3:Depends}. We also ensure that python3-requests is included as the tool requires this. No other dependencies are needed by this tool, so we are done. The final thing we need to ensure we change is the architecture from any to all, as this tool can be installed on all architectures. Copyright Everything that gets created has an original author. They control what happens with it and it needs to be respected. We can call out this in the copyright file. More information on the subject can be found on the Debian documentation and here. Below is the skeleton template output (with comments removed): kali@kali:~/kali/packages/instaloader$ grep -v '#' debian/copyright Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Source: <url://example.com> Upstream-Name: instaloader Upstream-Contact: <preferred name and address to reach the upstream project> Files: * Copyright: <years> <put author's name and email here> <years> <likewise for another author> License: <special license> <Put the license of the package here indented by 1 space> <This follows the format of Description: lines in control file> . <Including paragraphs> Files: debian/* Copyright: 2020 Joseph O'Gorman <[email protected]> License: GPL-2+ This package is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. . This package is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. . You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/> Comment: On Debian systems, the complete text of the GNU General Public License version 2 can be found in "/usr/share/common-licenses/GPL-2". kali@kali:~/kali/packages/instaloader$ The original tool’s author has ownership on their work, and the work we have put into creating the package belongs to us. 
After updating it, it looks like the following: kali@kali:~/kali/packages/instaloader$ vim debian/copyright kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/copyright Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Source: https://github.com/instaloader/instaloader Upstream-Name: instaloader Files: * Copyright: 2016-2020 Alexander Graf <[email protected]> 2016-2020 André Koch-Kramer <[email protected]> License: MIT Files: debian/* Copyright: 2020 Joseph O'Gorman <[email protected]> License: MIT License: MIT The MIT License Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: . The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. . THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. kali@kali:~/kali/packages/instaloader$ We altered the following: We removed an optional parameter (Upstream-Contact), as that is touched on in the copyright file. We put the homepage of the application in Source. We put in the two authors’ names and addresses from setup.py; the dates came from the LICENSE file. Rather than putting the whole block of MIT license text directly after, we placed it towards the end of the file, and gave a header to it. We replaced the GPL-2+ used as default for the packaging section with the same MIT license, which is used in the application. This is the standard for Debian packages (packaging work should match the application’s license). Rules This file is a Makefile, for building the Debian package. More information on the subject can be found in the Debian documentation. The output of the template looks like the following: kali@kali:~/kali/packages/instaloader$ cat debian/rules #!/usr/bin/make -f # See debhelper(7) (uncomment to enable). # Output every command that modifies files on the build system. #export DH_VERBOSE = 1 # See FEATURE AREAS in dpkg-buildflags(1). #export DEB_BUILD_MAINT_OPTIONS = hardening=+all # See ENVIRONMENT in dpkg-buildflags(1). # Package maintainers to append CFLAGS. #export DEB_CFLAGS_MAINT_APPEND = -Wall -pedantic # Package maintainers to append LDFLAGS. #export DEB_LDFLAGS_MAINT_APPEND = -Wl,--as-needed %: dh $@ # dh_make generated override targets. # This is an example for Cmake (see <https://bugs.debian.org/641051>). #override_dh_auto_configure: # dh_auto_configure -- \ # -DCMAKE_LIBRARY_PATH=$(DEB_HOST_MULTIARCH) kali@kali:~/kali/packages/instaloader$ So there are a lot of items which are pre-commented out, which may be handy for debugging & troubleshooting. Other than the shebang (#!/usr/bin/make -f), there are only two other lines currently in use: %: dh $@ This is a wildcard (%) which feeds all the arguments into dh.
What needs to go here now starts to depend on the program and how complex it is. As our program is a python application we are going to have to tell it to build with python3. We also need to tell it to use pybuild to build, as we have a setup.py file included in the source of the application. If there was not a setup.py file, we would not add this flag. We also need to tell PyBuild the name of the application. This looks like: kali@kali:~/kali/packages/instaloader$ vim debian/rules kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/rules #!/usr/bin/make -f #export DH_VERBOSE = 1 export PYBUILD_NAME = instaloader %: dh $@ --with python3 --buildsystem=pybuild kali@kali:~/kali/packages/instaloader$ NOTE: It uses TAB for indentation, as its a Makefile. Watch An additional file which we highly recommend to include is a watch file. This points to upstream, and is then used to detect if there is a more recent version of the application than what is packaged. This is useful when doing updates to packages. For more information, and example formats, please see the Debian wiki. Using this wiki, we can see there is an example for GitHub which is where our project is stored - github.com/instaloader/instaloader/: version=4 opts=filenamemangle=s/.+\/v?(\d\S+)\.tar\.gz/<project>-$1\.tar\.gz/ \ https://github.com/<user>/<project>/tags .*/v?(\d\S+)\.tar\.gz So lets now alter it to fit our needs: kali@kali:~/kali/packages/instaloader$ vim debian/watch kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/watch version=4 opts=filenamemangle=s/.+\/v?(\d\S+)\.tar\.gz/instaloader-$1\.tar\.gz/ \ https://github.com/instaloader/instaloader/tags .*/v?(\d\S+)\.tar\.gz kali@kali:~/kali/packages/instaloader$ NOTE: This has two spaces for any indentation So let’s do a quick check to see if its working right: kali@kali:~/kali/packages/instaloader$ uscan -vv --no-download [...] uscan info: Found the following matching hrefs on the web page (newest first): /instaloader/instaloader/archive/v4.4.4rc3.tar.gz (4.4.4rc3) index=4.4.4rc3-1 /instaloader/instaloader/archive/v4.4.4rc2.tar.gz (4.4.4rc2) index=4.4.4rc2-1 /instaloader/instaloader/archive/v4.4.4rc1.tar.gz (4.4.4rc1) index=4.4.4rc1-1 /instaloader/instaloader/archive/v4.4.4.tar.gz (4.4.4) index=4.4.4-1 /instaloader/instaloader/archive/v4.4.3.tar.gz (4.4.3) index=4.4.3-1 [...] $newversion = 4.4.4rc3 $lastversion = 4.4.4 [...] uscan: Newest version of instaloader on remote site is 4.4.4rc3, local version is 4.4.4 uscan: => Newer package available from https://github.com/instaloader/instaloader/archive/v4.4.4rc3.tar.gz uscan info: Scan finished kali@kali:~/kali/packages/instaloader$ Looks like its not! Its correctly detected all the versions, but its not sorted the order correctly (due to the release candidate). We know this by going to the release page: Looking back at the Debian wiki, there is a section called Common mistakes: Not mangling upstream versions that are alphas, betas or release candidates to make them sort before the final release. The solution is to use “uversionmangle” like this: opts=uversionmangle=s/(\d)[_\.\-\+]?((RC|rc|pre|dev|beta|alpha)\d*)$/$1~$2/ However, we need to edit it a bit to fit Instaloader. This can be figured out through trial and error using the above uversionmangle as the base. 
Let’s see how it works: kali@kali:~/kali/packages/instaloader$ vim debian/watch kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/watch version=4 opts=uversionmangle=s/(\d)[_\.\-\+]?((RC|rc|pre|dev|beta|alpha|a)\d*)$// \ https://github.com/instaloader/instaloader/tags .*/v?(\d\S+)\.tar\.gz kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ uscan -vv --no-download [...] uscan info: Newest version of instaloader on remote site is 4.4.4, local version is 4.4.4 uscan info: => Package is up to date for from https://github.com/instaloader/instaloader/archive/v4.4.4.tar.gz uscan info: Scan finished kali@kali:~/kali/packages/instaloader$ Success! .Install & Helper-Scripts Everything we have done so far would just be for building the package, but we haven’t said how to install the application: kali@kali:~/kali/packages/instaloader$ vim debian/instaloader.install kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/instaloader.install instaloader.py usr/share/instaloader/ instaloader usr/share/instaloader/ kali@kali:~/kali/packages/instaloader$ NOTE: There is no leading slash in the target directory We can go forward with this, but it may not behave like we were expecting. This is because we don’t have anything in $PATH, so if we went to the command line and tried typing in instaloader.py it’s not going to work (also it has the file extension, .py). The solution is to create a helper-script, which is placed into $PATH (and which we include in the .install file): kali@kali:~/kali/packages/instaloader$ mkdir -p debian/helper-script/ kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ vim debian/helper-script/instaloader kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/helper-script/instaloader #!/bin/sh exec python3 /usr/share/instaloader/instaloader.py "$@" kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ vim debian/instaloader.install kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/instaloader.install instaloader.py usr/share/instaloader/ instaloader usr/share/instaloader/ debian/helper-script/instaloader usr/bin/ kali@kali:~/kali/packages/instaloader$ With that, all the necessary debian/ files are added. Time to build! Packing Up Time to bundle everything into a file. We are going to use sbuild to create the package. This has its pros and cons. One of the pros is that if it builds here, it will also build elsewhere, as it’s meant for build daemons. The downside is that it will require access to a network repository, as it will try to handle detecting and installing any dependencies missing in the chroot, making it slower to build. If you don’t want to use sbuild, just drop it from the arguments (e.g. gbp buildpackage); however, you will then be required to install what’s in the Build-Depends section of debian/control (e.g. sudo apt install -y dh-python python3-all python3-setuptools python3-requests). So let’s give sbuild a try: kali@kali:~/kali/packages/instaloader$ gbp buildpackage --git-builder=sbuild gbp:error: Can't determine package type: Failed to read changelog: can't get HEAD:debian/changelog: fatal: path 'debian/changelog' exists on disk, but not in 'HEAD' kali@kali:~/kali/packages/instaloader$ Oops! We haven’t committed our changes to git. Shame on us.
If we wanted to, we could bypass this by doing gbp buildpackage --git-builder=sbuild --git-export=WC, which would allow us to test out our values in debian/ before committing to it, rather than cluttering up the git history with various debugging/troubleshooting commits. Then when we have our package in a working state, we can then commit to git, and try again, like so: kali@kali:~/kali/packages/instaloader$ git status On branch kali/master Untracked files: (use "git add <file>..." to include in what will be committed) debian/ nothing added to commit but untracked files present (use "git add" to track) kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git add debian/ kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git commit -m "Initial release" [kali/master 10a9e96] Add debian/ files 8 files changed, 94 insertions(+) create mode 100644 debian/changelog create mode 100644 debian/control create mode 100644 debian/copyright create mode 100755 debian/helper-script/instaloader create mode 100644 debian/instaloader.install create mode 100755 debian/rules create mode 100644 debian/source/format create mode 100644 debian/watch kali@kali:~/kali/packages/instaloader$ Let’s try and build again: NOTE: You may not get the following error (as it depends on how “clean” your OS is): kali@kali:~/kali/packages/instaloader$ gbp buildpackage --git-builder=sbuild gbp:info: Exporting 'HEAD' to '/home/kali/kali/build-area/instaloader-tmp' gbp:info: Moving '/home/kali/kali/build-area/instaloader-tmp' to '/home/kali/kali/build-area/instaloader-4.4.4' gbp:info: Performing the build dh clean --with python3 --buildsystem=pybuild dh: error: unable to load addon python3: Can't locate Debian/Debhelper/Sequence/python3.pm in @INC (you may need to install the Debian::Debhelper::Sequence::python3 module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.30.3 /usr/local/share/perl/5.30.3 /usr/lib/x86_64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl) at (eval 25) line 1. BEGIN failed--compilation aborted at (eval 25) line 1. make: *** [debian/rules:7: clean] Error 255 E: Failed to clean source directory /home/kali/kali/build-area/instaloader-4.4.4 (/home/kali/kali/build-area/instaloader_4.4.4-0kali1.dsc) gbp:error: 'sbuild' failed: it exited with 1 kali@kali:~/kali/packages/instaloader$ If you see the above error, this is because dh-python is missing from our OS. We can quickly fix this by doing: kali@kali:~/kali/packages/instaloader$ sudo apt install -y dh-python kali@kali:~/kali/packages/instaloader$ So, one more try and building: kali@kali:~/kali/packages/instaloader$ gbp buildpackage --git-builder=sbuild gbp:info: Exporting 'HEAD' to '/home/kali/kali/build-area/instaloader-tmp' gbp:info: Moving '/home/kali/kali/build-area/instaloader-tmp' to '/home/kali/kali/build-area/instaloader-4.4.4' gbp:info: Performing the build dh clean --with python3 --buildsystem=pybuild [...] +------------------------------------------------------------------------------+ | Package contents | +------------------------------------------------------------------------------+ [...] Install lintian build dependencies (apt-based resolver) ------------------------------------------------------- [...] 
E: instaloader source: source-is-missing [docs/_static/bootstrap-4.1.3.bundle.min.js] W: instaloader: no-manual-page [usr/bin/instaloader] E: Lintian run failed (runtime error) [...] +------------------------------------------------------------------------------+ | Summary | +------------------------------------------------------------------------------+ Build Architecture: amd64 Build Type: full Build-Space: 2460 Build-Time: 5 Distribution: kali-dev Host Architecture: amd64 Install-Time: 37 Job: /home/kali/kali/build-area/instaloader_4.4.4-0kali1.dsc Lintian: error Machine Architecture: amd64 Package: instaloader Package-Time: 45 Source-Version: 4.4.4-0kali1 Space: 2460 Status: successful Version: 4.4.4-0kali1 -------------------------------------------------------------------------------- Finished at 2020-07-03T15:51:09Z Build needed 00:00:45, 2460k disk space kali@kali:~/kali/packages/instaloader$ The output here is very long, so we have truncated it, but we can see its being built successfully. Even with an error, warning, and information from lintian! Let’s double check to see what got created: kali@kali:~/kali/packages/instaloader$ ls ~/kali/build-area/instaloader* /home/kali/kali/build-area/instaloader_4.4.4-0kali1_all.deb /home/kali/kali/build-area/instaloader_4.4.4-0kali1.debian.tar.xz /home/kali/kali/build-area/instaloader_4.4.4-0kali1_amd64.build /home/kali/kali/build-area/instaloader_4.4.4-0kali1.dsc /home/kali/kali/build-area/instaloader_4.4.4-0kali1_amd64.buildinfo /home/kali/kali/build-area/instaloader_4.4.4.orig.tar.gz /home/kali/kali/build-area/instaloader_4.4.4-0kali1_amd64.changes kali@kali:~/kali/packages/instaloader$ We have output! Making Lintian Happy For more information, see Debian’s documentation. Let’s try to understand the error E: instaloader source: source-is-missing [docs/_static/bootstrap-4.1.3.bundle.min.js]: kali@kali:~/kali/packages/instaloader$ lintian-explain-tags source-is-missing N: E: source-is-missing N: N: The source of the following file is missing. Lintian checked a few possible paths to find the source, and did not find it. N: N: Please repack your package to include the source or add it to "debian/missing-sources" directory. N: N: Please note, that very-long-line-length-in-source-file tagged files are likely tagged source-is-missing. It is a feature not a bug. N: N: Visibility: error N: Show-Always: no N: Check: files/source-missing N: kali@kali:~/kali/packages/instaloader$ The issue with the file docs/_static/bootstrap-4.1.3.bundle.min.js is that it’s a minified Javascript. It’s not human readable, it can’t be modified, therefore it’s not considered as a source file. As Lintian suggests, we can provide the source for this file in debian/missing-sources, however as we do not have the source handy we should explore other options. For this guide we will focus on Lintian overrides. Because the offending file is in the docs directory, which, if we investigate the included files, is what Instaloader uses to host their documentation site pages, we can ignore this file and tell Lintian to as well. To do this we will create the file instaloader.lintian-overrides located in the debian directory. From here we can copy and paste the error message from the : on. While we are here, we may as well ignore the warning about no-manual-page. 
Here is our resulting file: kali@kali:~/kali/packages/instaloader$ vim debian/instaloader.lintian-overrides kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/instaloader.lintian-overrides source-is-missing [docs/_static/bootstrap-4.1.3.bundle.min.js] no-manual-page [usr/bin/instaloader] kali@kali:~/kali/packages/instaloader$ We can now commit our changes and rebuild the package with the same command and see if it was successful with no error: kali@kali:~/kali/packages/instaloader$ gbp buildpackage --git-builder=sbuild [...] +------------------------------------------------------------------------------+ | Package contents | +------------------------------------------------------------------------------+ [...] Install lintian build dependencies (apt-based resolver) ------------------------------------------------------- [...] Running lintian... E: instaloader source: source-is-missing [docs/_static/bootstrap-4.1.3.bundle.min.js] I: instaloader: unused-override source-is-missing [docs/_static/bootstrap-4.1.3.bundle.min.js] [usr/share/lintian/overrides/instaloader:2] N: 0 hints overridden; 1 unused override E: Lintian run failed (runtime error) [...] kali@kali:~/kali/packages/instaloader$ Uh oh! It looks like it still failed, and that it didn’t even use our override for the source-is-missing error! If we look closer, we can see a difference between the previous warning we were getting about no-manual-page and source-is-missing. The section before the :, telling us the level of error and the package name, includes source in our source-is-missing error. This is because the issue resides in the source package, or the imported package, rather than the output, or the binary package. To solve this we will need to create a new source directory in debian/ and a new lintian-overrides file. Lets do that now: kali@kali:~/kali/packages/instaloader$ mkdir debian/source/ kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ vim debian/source/lintian-overrides kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/source/lintian-overrides source-is-missing [docs/_static/bootstrap-4.1.3.bundle.min.js] kali@kali:~/kali/packages/instaloader$ Don’t forget to remove the override from our previous file as well! kali@kali:~/kali/packages/instaloader$ vim debian/instaloader.lintian-overrides kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ cat debian/instaloader.lintian-overrides no-manual-page [usr/bin/instaloader] kali@kali:~/kali/packages/instaloader$ We can once more commit our changes and rebuild the package and see it was successful with no error: kali@kali:~/kali/packages/instaloader$ gbp buildpackage --git-builder=sbuild [...] +------------------------------------------------------------------------------+ | Package contents | +------------------------------------------------------------------------------+ [...] Install lintian build dependencies (apt-based resolver) ------------------------------------------------------- [...] Running lintian... I: Lintian run was successful. [...] kali@kali:~/kali/packages/instaloader$ Manual Install Let’s now give our package a test drive: kali@kali:~/kali/packages/instaloader$ sudo apt install ~/kali/build-area/instaloader_4.4.4-0kali1_all.deb Selecting previously unselected package instaloader. (Reading database ... 154513 files and directories currently installed.) Preparing to unpack .../instaloader_4.4.4-0kali1_all.deb ... 
Unpacking instaloader (4.4.4-0kali1) ... Setting up instaloader (4.4.4-0kali1) ... Processing triggers for kali-menu (2020.3.0) ... kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ instaloader usage: instaloader.py [--comments] [--geotags] [--stories] [--highlights] [--tagged] [--igtv] [--login YOUR-USERNAME] [--fast-update] profile | "#hashtag" | %%location_id | :stories | :feed | :saved instaloader.py --help kali@kali:~/kali/packages/instaloader$ Success! This is looking good (not perfect), and eagle eye spotters may be able to spot why (in the output) - there is the file extension in the output (instaloader.py), yet the command used to call it doesn’t have it (instaloader). We are going to need to either patch the application or find a different way to call the application to address this. But this will be covered in another guide. Save & Share Let’s now make sure everything is in git locally, before we push it out to the remote repository (the one we defined in debian/control): kali@kali:~/kali/packages/instaloader$ git status On branch kali/master nothing to commit, working tree clean kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git branch -v * kali/master 10a9e96 Use lintian-override to clear all warnings and errors pristine-tar 439fe30 pristine-tar data for instaloader_4.4.4.orig.tar.gz upstream 494f718 New upstream version 4.4.4 kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git remote -v kali@kali:~/kali/packages/instaloader$ We don’t (yet) have a remote repository setup, so we go to GitLab and create a new project before continuing. Afterwards: kali@kali:~/kali/packages/instaloader$ git remote add origin [email protected]:kalilinux/packages/instaloader.git kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git remote -v origin [email protected]:kalilinux/packages/instaloader.git (fetch) origin [email protected]:kalilinux/packages/instaloader.git (push) kali@kali:~/kali/packages/instaloader$ Now we need to send our local work to the new remote repository (and don’t forget the tags): kali@kali:~/kali/packages/instaloader$ git push --all Enumerating objects: 95, done. Counting objects: 100% (95/95), done. Delta compression using up to 2 threads Compressing objects: 100% (88/88), done. Writing objects: 100% (95/95), 291.16 KiB | 7.46 MiB/s, done. Total 95 (delta 1), reused 0 (delta 0), pack-reused 0 remote: remote: To create a merge request for pristine-tar, visit: remote: https://gitlab.com/kalilinux/packages/instaloader/-/merge_requests/new?merge_request%5Bsource_branch%5D=pristine-tar remote: remote: To create a merge request for upstream, visit: remote: https://gitlab.com/kalilinux/packages/instaloader/-/merge_requests/new?merge_request%5Bsource_branch%5D=upstream remote: To gitlab.com:kalilinux/packages/instaloader.git * [new branch] kali/master -> kali/master * [new branch] pristine-tar -> pristine-tar * [new branch] upstream -> upstream kali@kali:~/kali/packages/instaloader$ kali@kali:~/kali/packages/instaloader$ git push --tags Enumerating objects: 1, done. Counting objects: 100% (1/1), done. Writing objects: 100% (1/1), 172 bytes | 172.00 KiB/s, done. Total 1 (delta 0), reused 0 (delta 0), pack-reused 0 To gitlab.com:kalilinux/packages/instaloader.git * [new tag] upstream/4.4.4 -> upstream/4.4.4 kali@kali:~/kali/packages/instaloader$
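As an optional extra check, not part of the walk-through above, we could also confirm exactly what was shipped in the package and what landed on the filesystem. A small sketch using standard Debian tooling, assuming the build output paths shown earlier:

# List the files contained in the built .deb
dpkg-deb -c ~/kali/build-area/instaloader_4.4.4-0kali1_all.deb

# After installing, list the files the package placed on the system
dpkg -L instaloader

# Confirm the helper script is the command found in $PATH
which instaloader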
8585
dbpedia
3
27
https://betterscientificsoftware.github.io/python-for-hpc/tutorials/python-pypi-packaging/
en
Python: Creating a pip installable package
[ "https://betterscientificsoftware.github.io/python-for-hpc//images/day_and_night.svg" ]
[]
[]
[ "" ]
null
[ "Stephen Hudson" ]
null
Creating a pip installable package for PyPI
en
Python for HPC: Community Materials
https://betterscientificsoftware.github.io//python-for-hpc/tutorials/python-pypi-packaging/
Contents: Introduction · What is pip? · Creating a Python package · Creating a source distribution · Creating a wheel distribution · Testing and Publishing package on PyPI · Uploading to testpypi · Uploading to PyPI · Downloading tarball without install · Example projects · Feedback Introduction This is a quickstart guide to Python Packaging with a particular focus on the creation of a PyPI package, which will enable users to “pip install” the package. The document is broken down into sections so that readers may easily skip parts of the process they are already familiar with. All but the final section (Uploading to PyPI) can be undertaken as an exercise to understand Python packaging and test the process, without publishing a package on the formal PyPI distribution. For a more detailed reference on package creation, see the official Python Packaging Authority (PyPA) website. Note: PyPI should be pronounced “pie P I” to avoid confusion with pypy (a Python implementation). What is pip? pip is a package management system, specifically designed for installing Python packages from the internet-hosted Python Package Index (commonly known as PyPI). It is the most common way to install Python packages. E.g. The package can now be imported in Python scripts. You may need to run as sudo if you have root privileges, or append --user to install under your home directory (often this will be under $HOME/.local). Note: pip3 is used to install Python3 packages; however, in some environments the command pip may point to pip3, just as python may point to Python3. You can use which pip to check this. For this document, examples will show the command simply as pip. Tip: To download a specific version of a package: Tip: To find out what versions are available: This is essentially trying to install a version that does not exist and causes pip to list available versions. Tip: To see what version is currently installed: Package information, including install location, can be obtained by running the Python interpreter: Installing pip is easy: https://pip.pypa.io/en/stable/installing Creating a Python package This article gives an overview of how to create an installable Python package. Note on Ambiguity: The term package can refer to an installable python package within a project (a directory containing an __init__.py file). It can also mean a distribution package, which refers to the entire distributed part of the project (as in a source distribution - or “tarball”). Such a package may consist of multiple python packages/sub-packages. In most cases the context should be sufficient to make the distinction. A Python project will consist of a root directory with the name of the project. Somewhere inside this will be included a directory which will constitute the main installable package. Most often this has the same name as the project (this is not compulsory but makes things a bit simpler). Inside that package directory, alongside your python files, create a file called __init__.py. This file can be empty, and it denotes the directory as a python package. When you pip install, this directory will be installed and become importable. E.g. A simple project may have this structure: pyexample ├── LICENSE ├── pyexample │ ├── __init__.py │ ├── module_mpi4py_1.py │ ├── module_numpy_1.py │ └── module_numpy_2.py ├── README.rst └── setup.py At the root directory, you will need a setup.py file, which will govern the installation of your package. The setuptools package is recommended for this (the in-built distutils is an older alternative).
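As a rough illustration, a minimal setup.py for the pyexample layout above might look like the following sketch; the version, author and dependency values are placeholder assumptions rather than values taken from this guide:

# setup.py -- minimal setuptools-based install script (illustrative sketch)
from setuptools import setup, find_packages

setup(
    name='pyexample',                      # distribution name as it would appear on PyPI
    version='0.1.0',                       # assumed starting version (semantic versioning)
    description='Example pip-installable package',
    author='Your Name',                    # placeholder
    packages=find_packages(),              # finds the pyexample/ package via its __init__.py
    install_requires=['numpy', 'mpi4py'],  # assumed runtime dependencies, guessed from the module names
    classifiers=[
        'Programming Language :: Python :: 3',
    ],
)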
The main requirement in setup.py is to call the setup routine, providing project information as keyword arguments. A lot of information can be provided, but a minimalist example only needs a handful of arguments, as in the sketch above. Further information on setup options can be found in the PyPA packaging instructions, with yet more detailed and up-to-date information in the setuptools command reference.

The classifiers are not functional; they are for documentation and will be listed on the PyPI page once uploaded. It is conventional to include the Python versions supported in this release. A complete list of classifiers is available in the PyPI classifiers list.

Having created a setup.py, test the install with pip by running pip install . in the root directory. This is recommended in place of the default python setup.py install, which uses easy_install. If you have an existing install and want to ensure the package and dependencies are updated, use the --upgrade flag. To uninstall, give pip the package name: pip uninstall pyexample.

Note: A reliable clean uninstall is one advantage of using setuptools over distutils.

It is worth noting that the version in your setup.py will not provide the package attribute __version__. A common place to provide this, along with other meta-data for the package, is inside __init__.py, which is run whenever the module is imported. For example, __init__.py may contain lines such as __version__ = '0.1.0' and __author__ = 'Your Name'. If you now pip install again and run the Python interpreter, you should be able to access these variables (e.g. import pyexample; pyexample.__version__). This does create the problem of having two places holding the version, which must also match any release tags created (e.g. in git). Various approaches exist for using a single version number. See https://packaging.python.org/guides/single-sourcing-package-version

If you wish to create sub-packages, these should ideally be directories inside the main package (re-mapping from other locations is possible using the package_dir argument in setup, but this can cause a problem with develop installs). The sub-packages also require an __init__.py in the directory.

Creating a source distribution

It is recommended that all Python projects provide a source distribution. PyPI has certain required meta-data that the setup.py should provide. To quickly check whether your project has this data, run python setup.py check from the root directory; if nothing is reported, your package is acceptable.

Create a source distribution from your root directory with python setup.py sdist. This creates a dist/ directory containing a compressed archive of the package (e.g. <PACKAGE_NAME>-<VERSION>.tar.gz on Linux). This file is your source distribution. If it does not automatically contain what you want, then you might consider using a MANIFEST file (see https://docs.python.org/distutils/sourcedist).

Note: A <PACKAGE_NAME>.egg-info directory will also be created in your root directory containing meta-data about your distribution. This can safely be deleted if it is not wanted (despite the extension, this is generated even though you have not built an egg format package).

Creating a wheel distribution

Optionally, you may create a wheel distribution. This is a built distribution for the current platform. Wheels should be used in place of the older egg format. Bear in mind that any extensions will be built for the given platform, and as such this must be consistent with any other project dependencies. Wheels will speed up installation if you have compiled code extensions, as the build step is not required. If you do not have the wheel package, you can pip install it (pip install wheel). There are different types of wheels.
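As a quick sketch of the distribution-building commands side by side (assuming the pyexample project; when to use the universal flag is explained next):

    python setup.py sdist                      # source distribution -> dist/pyexample-0.1.0.tar.gz
    python setup.py bdist_wheel --universal    # universal wheel (pure Python, py2/py3 compatible)
    python setup.py bdist_wheel                # platform- or interpreter-specific wheel
    ls dist/                                   # check what was produced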
However, if your project is pure Python and Python 2/3 compatible, create a universal wheel (the --universal variant in the sketch above). If it is not Python 2/3 compatible, or contains compiled extensions, just use the plain bdist_wheel command. The installable wheel will be created under the dist/ directory. A build directory will also be created containing the built code. Further details on building wheels can be found here: https://packaging.python.org/tutorials/distributing-packages

Testing and Publishing package on PyPI

Distributing the package on PyPI will enable anyone online to pip install the package. First you must set up an account on PyPI. If you are going to test your package on the PyPI test site, you will need to set up an account there also. This is easy.

Create an account on PyPI: go to https://pypi.python.org and select Register. Follow the instructions.

Create an account on testpypi: go to https://testpypi.python.org and select Register. Follow the instructions.

You will also need a version number. Semantic versioning is recommended (see https://semver.org for details). The standard starting version for a project in development is 0.1.0.

The best approach to uploading to PyPI is to use twine.

IMPORTANT: First you can test your upload using the PyPI test site. It is highly recommended that you do this and test installing your package as below.

NOTE: Once you upload a package to PyPI it is possible to remove it, but you cannot upload another package with the same version number – this breaks the version contract. It is therefore especially prudent to test with testpypi first. Note that anything you put on testpypi should be considered disposable, as the site regularly prunes content.

Uploading to testpypi

This section shows how to upload a source distribution of your package. Further documentation is at: https://packaging.python.org/guides/using-testpypi

Note: That link includes the option of using a pypirc file to abbreviate some of the command lines below.

A source distribution provides everything needed to build/install the package on any supported platform. Test suites, documentation, and supporting data can also be included. You can now upload your package to testpypi as follows, assuming your source distribution under dist/ is called pyexample-0.1.0.tar.gz (for example, twine upload --repository-url https://test.pypi.org/legacy/ dist/pyexample-0.1.0.tar.gz). Alternatively, passing dist/* instead of a single filename will upload all of your generated distributions under the dist/ directory. This may be used if you create wheels (see above) in addition to a source distribution. You will be requested to give the username and password for your testpypi account.

Option: You have the option to digitally sign your package when you upload. You will need a gpg key set up to do this. It should be noted, however, that pip does not currently check gpg signatures when installing - this has to be done manually. To digitally sign using your gpg key (e.g. for package pyexample at version 0.1.0), run gpg --detach-sign -a dist/pyexample-0.1.0.tar.gz. A file pyexample-0.1.0.tar.gz.asc will be created. Now upload the archive along with the signature file using twine.

Note: --detach-sign means you are writing the signature into a separate file *.asc

The package should now be uploaded to: https://testpypi.python.org/pypi

Note how the info/classifiers you supplied in setup.py are shown on the page. You can now test pip install from the command line; for example, to install the package pyexample into your user install space, run pip install --index-url https://test.pypi.org/simple/ --user pyexample (using the current TestPyPI index URL).

Uploading to PyPI

Once you are happy with the repository on testpypi, uploading to PyPI will be the same command-line process, but without having to specify the URL arguments. For example, the steps above simply become plain twine upload commands: one to upload all distributions created under dist/, or one to upload just the source distribution.
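A sketch of those two upload commands (the filename assumes the pyexample 0.1.0 source distribution used throughout; twine will prompt for your PyPI credentials):

    twine upload dist/*                         # upload everything built under dist/
    twine upload dist/pyexample-0.1.0.tar.gz    # upload only the source distribution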
To upload the source distribution with a gpg signature, pass the .tar.gz archive together with its .asc signature file to twine upload, as in the testpypi example above (minus the repository URL argument).

Your package should now be uploaded to: https://pypi.python.org/pypi

The package should now pip install, e.g. pip install --user pyexample.

It is also recommended that you use virtual environments to test installing dependencies from scratch and for trying out different Python versions. Check the required flags to ensure your environment is isolated: for Virtualenv, use the flag --no-site-packages; for Conda, set the environment variable export PYTHONNOUSERSITE=1 before activating your environment. Packages that are explicitly linked through PYTHONPATH will still be found, however.

Downloading tarball without install

To test downloading a source distribution (without installing it) along with its dependencies, use pip download pyexample; to download just the package without dependencies, add --no-deps (pip download pyexample --no-deps). Downloading the source distribution is a good way to check that it includes what you want by default. If not, then consider adding a MANIFEST file, which instructs setuptools what to include in the source distribution.

Example projects

pyexample: A small sample project using numpy and mpi4py (used as the example above). Location: GitHub. Note: To run the mpi4py test, use at least 2 processors: mpiexec -np 2 python module_mpi4py_1.py

libEnsemble: An Argonne project for running ensembles of calculations. Location: GitHub, PyPI. Related content includes: a setup.py that maps a different source directory structure to packages and sub-packages using the package_dir setup argument, and use of a MANIFEST file to specify the source distribution.

Feedback

Any feedback/corrections/additions are welcome: If this was helpful, please leave a star on the GitHub page. Leave a comment below. Email: shudson@anl.gov. Or fork on GitHub and make a pull request.
8585
dbpedia
1
12
https://docs.veracode.com/r/About_auto_packaging
en
About auto-packaging
https://docs.veracode.co…code-favicon.png
https://docs.veracode.co…code-favicon.png
[ "https://docs.veracode.com/img/Veracode_Docs_Logo_Light_Mode.svg", "https://docs.veracode.com/img/Veracode_Docs_Logo_Dark_Mode.svg" ]
[]
[]
[ "" ]
null
[]
2024-08-08T19:40:14+00:00
Veracode auto-packaging automates the process of packaging your projects for Static Analysis and Software Composition Analysis (SCA) upload and scan. By automating packaging, you can reduce the burden on your teams to correctly package projects manually, while also ensuring more accurate and consistent scan results.
en
/img/veracode-favicon.png
https://docs.veracode.com/r/About_auto_packaging
Artifact language | Language tag | Language suffix tag | Example filename
.NET assemblies | dotnet | None | veracode-auto-pack-Web-dotnet.zip
.NET with JavaScript | dotnet | js | veracode-auto-pack-Web-dotnet-js.zip
Android | None | None | The build.gradle file defines the filenames of Java artifacts.
COBOL | cobol | None | veracode-auto-pack-EnterpriseCOBOLv6.3-cobol.zip
C/C++ Linux | c_cpp | None | veracode-auto-pack-CppProjectLibsAndExecutables-c_cpp.zip
C/C++ Windows | msvc | None | veracode-auto-pack-$(SolutionName)-msvc.zip
Dart and Flutter | None | None | The project configuration for Flutter Android or Xcode defines the filenames.
Go | go | None | veracode-auto-pack-evil-app-go.zip
iOS with Xcarchive | ios | xcarchive | veracode-auto-pack-duckduckgo-ios-xcarchive.zip
iOS with CocoaPods | ios | podfile | veracode-auto-pack-signal-ios-podfile.zip
Java with Gradle | None | None | Defined by your build.gradle file.
Java with Maven | None | None | Defined by your pom.xml file.
JavaScript | js | None | veracode-auto-pack-NodeGoat-js.zip
Kotlin | None | None | The filenames of Java artifacts are defined by your build.gradle file.
Perl | perl | None | veracode-auto-pack-bugzilla-perl.zip
PHP | php | None | veracode-auto-pack-captainhook-php.zip
Python | python | None | veracode-auto-pack-dvsa-python.zip
React Native | js | None | veracode-auto-pack-convene-js.zip
Ruby | ruby | None | veracode-auto-pack-railsgoat-ruby.zip
Scala | None | None | The filenames of Java artifacts are defined by your SBT build properties.

Auto-packaging is integrated with the following products:

- Veracode CLI, to integrate auto-packaging in your development environment.
- Veracode GitHub Workflow Integration, to automate repo scanning with GitHub Actions. The auto-packager only supports Java, JavaScript, Python, Go, Scala, Kotlin, React Native, and Android repositories.
- Veracode Azure DevOps Workflow Integration, to automate repo scanning using your pipelines. The auto-packager supports Java, .NET, JavaScript, Python, Go, Kotlin, and React Native projects.
- Veracode Scan for JetBrains, to auto-package applications, scan, and remediate findings in JetBrains IDEs.
- Veracode Scan for VS Code, to auto-package applications, scan, and remediate findings in VS Code.

You can integrate the auto-packager with your local build environment or CI/CD. For example, to add auto-packaging to your build pipelines, you could add the CLI command veracode package to your development toolchains or build scripts (a sketch of such a step follows the list below). You might need to install one or more of the following tools in your environment:

- A build automation tool that defines build scripts or configurations that specify how to manage dependencies, compile source code, and package code as artifacts.
- A dependency management system to effectively handle project dependencies.
- A compiler that builds source code into executable code.
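As a sketch of such a pipeline step (the shell syntax, source path, and output directory are illustrative assumptions; the veracode package flags are the ones used in the examples that follow):

    # package the checked-out repository and keep the artifacts for upload to Veracode
    veracode package --source . --output verascan --trust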
If the auto-packager does not support specific versions, or it relies on a version supported by your package manager, the Versions column shows Not applicable.

Language | Versions | Package managers
.NET | .NET 6, 7, or 8; .NET Framework 4.6 - 4.8. Not supported: MAUI | All
Android | A JDK version that you have tested to build your project. | Gradle
COBOL | COBOL-74, COBOL-85, COBOL-2002 | Not applicable
C/C++ Linux | CentOS and Red Hat Enterprise 5-9, openSUSE 10-15 | Not applicable
C/C++ Windows | C/C++ (32-bit/64-bit) | Not applicable
Dart and Flutter | Dart 3.3 and earlier / Flutter 3.19 and earlier | Pub
Go | 1.14 - 1.22 | Go Modules
iOS | Not applicable | All
Java (select from the Package managers column) | A JDK version that you have tested to build your project. | Gradle, Maven
JavaScript and TypeScript | Not applicable | NPM, Yarn
Kotlin | A JDK version that you have tested to build your project. | Gradle, Maven
Perl | 5.x | Not applicable
PHP | Not applicable | Composer
Python | Not applicable | Pip, Pipenv, setuptools, virtualenv
React Native | Not applicable | NPM, Yarn, Bower
Ruby on Rails | Ruby 2.4 or greater | Bundler
Scala | A JDK version that you have tested to build your project. | Gradle, Maven, sbt

Under each supported language, the Veracode CLI commands and output examples demonstrate the packaging process when you run the veracode package command. You can use the auto-packager with various integrations, but the CLI output examples help you visualize the packaging process. All examples assume the location of the CLI executable is in your PATH. You might see different output in your environment.

Before you can run the auto-packager for .NET projects, you must meet the following requirements:

Your environment must have:
- A supported version of .NET.
- A PATH environment variable that points to the dotnet or msbuild command.

Your projects must:
- Contain at least one syntactically correct .csproj file.
- Compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output.

1. Recursively searches your repo for all .csproj submodules.
2. To publish an SDK-style project, runs the following command:
   dotnet publish -c Debug -p:UseAppHost=false -p:SatelliteResourceLanguages='en' -p:WasmEnableWebcil=false -p:BlazorEnableCompression=false
3. To publish a .NET Framework project, runs a command similar to the following:
   msbuild Project.csproj /p:TargetFrameworkVersion=v4.5.2 /p:WebPublishMethod="FileSystem" /p:PublishProvider=FileSystem /p:LastUsedBuildConfiguration=Debug /p:LastUsedPlatform=Any CPU /p:SiteUrlToLaunchAfterPublish=false /p:LaunchSiteAfterPublish=false /p:ExcludeApp_Data=true /p:PrecompileBeforePublish=true /p:DeleteExistingFiles=true /p:EnableUpdateable=false /p:DebugSymbols=true /p:WDPMergeOption="CreateSeparateAssembly" /p:UseFixedNames=true /p:UseMerge=false /p:DeployOnBuild=true
4. Filters out any test projects.
5. Packages the published project and saves the artifacts of your packaged project in the specified --output location.

    veracode package --source path/to/project/bobs-used-bookstore-sample --output verascan --trust
    Packager initiated...
    Verifying source project language ...
    Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Data'.
    Publish successful.
    Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Web'.
    Publish successful.
    Project Bookstore.Web zipped and saved to: path\to\verascan\veracode-auto-pack-Bookstore.Web-dotnet.zip
    DotNet project Bookstore.Web JavaScript packaged to: path\to\verascan\veracode-auto-pack-Bookstore.Web-dotnet-js.zip
    Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Cdk'.
    Publish successful.
    Project Bookstore.Cdk zipped and saved to: path\to\verascan\veracode-auto-pack-Bookstore.Cdk-dotnet.zip
    Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Domain'.
    Publish successful.
    Successfully created 3 artifact(s).
Created DotNet artifacts for DotNetPackager project. Total time taken to complete command: 11.656s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: Correct Java or Kotlin version present in the environment for packaging the application. Correct Android SDK version present in the environment for packaging the application. Other dependencies installed based on the repository dependency. The auto-packager completes the following steps, as shown in the example command output. To build a Gradle project, runs the command gradlew clean build -x test Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/sunflower --output verascan --trust Packaging code for project sunflowe. Please wait; this may take a while... Verifying source project language ... Copying Java artifacts for GradlePackager project. Copied artifact: path/to/verascan/app-benchmark.apk. Copied artifact: path/to/verascan/app-debug.apk. Copied artifact: path/to/verascan/macrobenchmark-benchmark.apk. Successfully created 3 artifact(s). Created Java artifacts for GradlePackager project. Total time taken to complete command: 1m35.117s Before you can run the auto-packager, you must meet the following requirements: Your COBOL programs must be in UTF-8 encoded files with one of the following extensions: .cob, .cbl, .cobol, or .pco. Your COBOL copybooks must be in UTF-8 encoded .cpy files. Veracode recommends you include all copybooks to generate the best scan results. The auto-packager completes the following steps, as shown in the example command output. Finds all the files matching the required extensions and packages them in a ZIP archive (artifact). Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/EnterpriseCOBOLv6.3 --output verascan --trust Packaging code for project EnterpriseCOBOLv6.3. Please wait; this may take a while... Verifying source project language ... [GenericPackagerCobol] Packaging succeeded for the path path/to/project/EnterpriseCOBOLv6.3 Successfully created 1 artifact(s). Created Cobol artifacts for GenericPackagerCobol project. Total time taken to complete command: 3.802s Before you can run the auto-packager, you must meet the following requirements: All project files and libraries have been compiled with debug information defined in the packaging guidelines. Auto-packaging must run on supported Linux OS architecture and distribution. For efficient packaging, all binaries and libraries have been collected in a single folder. The auto-packager completes the following steps, as shown in the example command output. Detects a Veracode-supported Linux OS architecture. If it does not detect a supported architecture, the auto-packager throws an error and exits packaging. Detects a Veracode-supported Linux OS distribution. Searches the prebuilt binary directory to find scan-supported binary files, then archives them in a single artifact. veracode package --source path/to/project/CppProjectLibsAndExecutables --output verascan --trust Packaging code for project CppProjectLibsAndExecutables. Please wait; this may take a while... Verifying source project language ... C/CPP project CppProjectLibsAndExecutables packaged to: /path/to/verascan/veracode-auto-pack-CppProjectLibsAndExecutables-c_cpp.zip Successfully created 1 artifact(s). Created CPlusPlus artifacts for GenericPackagerCPP project. 
Total time taken to complete command: 37.257s Before you can run the auto-packager, you must meet the following requirements: The project must contain at least one .sln file that is configured to build at least one supported C++ project. A supported C++ project is defined by a .vcxproj file where the following are true: Defines a supported project configuration: Targets a supported platform (x64 or Win32) Builds a supported binary (ConfigurationType is Application or DynamicLibrary) Is not a test Native Unit Test project or Google Unit Test project. msbuild command is available in the environment. Code can compile without errors. The auto-packager completes the following steps, as shown in the example command output. Searches the project directories to find supported .sln files. The search stops at each directory level where it finds supported files. For each .sln file found: Determines the solution configuration to use to build the top-level projects. If available, it uses the first solution configuration listed in the solution that has a supported project platform for a top-level C++ project, configured as a debug build. Determines the supported top-level C++ projects for that solution configuration. A top-level C++ project is a C++ project that is not a dependency of any other project configured to build for that solution configuration. Builds each supported top-level C++ project using compiler and linker settings required for Veracode to analyze Windows C/C++ applications: <ItemDefinitionGroup> <ClCompile> <DebugInformationFormat>ProgramDatabase</DebugInformationFormat> <Optimization>Disabled</Optimization> <BasicRuntimeChecks>Default</BasicRuntimeChecks> <BufferSecurityCheck>false</BufferSecurityCheck> </ClCompile> <Link> <LinkIncremental>false</LinkIncremental> <GenerateDebugInformation>true</GenerateDebugInformation> <ProgramDatabaseFile>$(OutDir)$(TargetName).pdb</ProgramDatabaseFile> </Link> </ItemDefinitionGroup> Creates an archive for each solution named veracode-auto-pack-$(SolutionName)-msvc.zip. Each archive contains a $(ProjectName) directory with all .exe, .dll, and .pdb build artifacts for each top-level project build target of the solution. veracode package --source path/to/project/example-cpp-windows --output verascan --trust Packaging code for project example-cpp-windows. Please wait; this may take a while... Verifying source project language ... Packaging Windows C/C++ artifacts for WinCppPackager publish path 'C:\Users\...\AppData\Local\Temp\2766238912731991934'. MSBuild commands successfully completed. Windows solution WS_AllSource packaged to: path\to\verascan\veracode-auto-pack-WS_AllSource-msvc.zip Packaging Windows C/C++ artifacts for WinCppPackager publish path 'C:\Users\...\AppData\Local\Temp\7662002083651398436'. MSBuild commands successfully completed. Windows solution allPepPCIF packaged to: path\to\verascan\veracode-auto-pack-allPepPCIF-msvc.zip Successfully created 2 artifact(s). Created Windows C/C++ artifacts for WinCppPackager project. Total time taken to complete command: 3m38.473s Before you can run the auto-packager, you must meet the following requirements: To ensure that Flutter installs successfully and validates all platform tools, successfully run flutter doctor. 
To generate an iOS Archive file, the project must be able to run the command: flutter build ipa --debug To generate an Android APK file, the project must be able to run the command: flutter build apk --debug The auto-packager completes the following steps, as shown in the example command output. Gathers APK and IPA files. Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/flutter-wonderous-app --output verascan --trust Packaging code for project flutter-wonderous-app. Please wait; this may take a while... Verifying source project language ... Copying artifacts for Dart Flutter for FlutterPackager project. Copied artifact: path/to/verascan/app-debug.apk. Successfully created 1 artifact(s). Created Dart artifacts for FlutterPackager project. Total time taken to complete command: 54.731s Before you can run the auto-packager, you must meet the following requirements: Your environment must have a supported version of Go. Your projects must: Support Go Modules. Contain a go.sum file and a go.mod file. Compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. To build and package a project, including the source code and the vendor folder, runs the command go mod vendor. Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/sftpgo --output verascan --trust Please ensure your project builds successfully without any errors. Packaging code for project sftpgo. Please wait; this may take a while... Verifying source project language ... Packaging GO artifacts for GoModulesPackager project 'sftpgo'. go mod vendor successful. Go project sftpgo packaged to: path/to/verascan/veracode-auto-pack-sftpgo-go.zip Successfully created 1 artifact(s). Created GoLang artifacts for GoModulesPackager project. Total time taken to complete command: 15.776s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: Xcode and the xcodebuild command-line tool installed. gen-ir installed. For example: # Add the brew tap to your local machine brew tap veracode/tap # Install the tool brew install gen-ir pod installed, if your projects use CocoaPods or third party tools. Your projects must compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. Checks that the podfile or podfile.lock files are present. Runs the command pod install. Checks that the .xcworkspace or .xcodeproj files are present. To build and package the project, runs: xcodebuild clean archive -PROJECT/WORKSPACE filePath -scheme SRCCLR_IOS_SCHEME -destination SRCCLR_IOS_DESTINATION -configuration SRCCLR_IOS_CONFIGURATION -archivePath projectName.xcarchive DEBUG_INFORMATION_FORMAT=dwarf-with-dsym ENABLE_BITCODE=NO The SRCCLR values are optional environment variables you can use to customize the xcodebuild archive command. Runs gen-ir on the artifact of your packaged project and the log files. Saves the artifact in the specified --output location. veracode package --source https://github.com/signalapp/Signal-iOS --type repo --output verascan --trust Packager initiated... Verifying source project language ... Packaging iOS artifacts for IOSPackager project 'MyProject'. iOS Project MyProject zipped and saved to: path/to/verascan/veracode-auto-pack-MyProject-ios-xcarchive.zip Successfully created 1 artifact(s). 
Created IOS artifacts for IOSPackager project. Total time taken to complete command: 9.001s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: A JDK version that you tested to successfully compile your application. Access to a gradlew command that points to the correct JAVA_HOME directory. If gradlew is not available, ensure the correct Gradle version is installed. Your projects must: Have the correct build.gradle file. Compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. To build the Gradle project and package it as a JAR file, runs the command gradlew clean build -x test. Copies the artifact of your packaged project to the specified --output location. veracode package --source path/to/project/example-java-gradle --output verascan --trust Packager initiated... Verifying source project language ... Copying Java artifacts for GradlePackager project. Copied artifact: path/to/verascan/example-java-gradle-1.0-SNAPSHOT.jar. Successfully created 1 artifact(s). Created Java artifacts for GradlePackager project. Total time taken to complete command: 7.174s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: A JDK version that you tested to successfully compile your application. Access to a mvn command that points to the correct JAVA_HOME directory. Your projects must: Have the correct pom.xml file. Compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. To build and package the Maven project, runs the command mvn clean package. Copies the artifact, such as JAR, WAR, EAR, of your packaged project to the specified --output location. veracode package --source path/to/project/example-java-maven --output verascan --trust Packager initiated... Verifying source project language ... Copying Java artifacts for Maven project. Copied artifact: path/to/verascan/example-java-maven-1.0-SNAPSHOT.jar. Successfully created 1 artifact(s). Created Java artifacts for Maven project. Total time taken to complete command: 6.799s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: The NPM or Yarn package manager installed. The correct Node, NPM, or Yarn version to package the project. Your projects must: Be able to resolve all dependencies with commands npm install or yarn install. Have the correct package.json file. Compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. To build and package the project, runs one of the following commands: For NPM, runs the command npm install. For Yarn, runs the command yarn install. Copies the artifact of your packaged project to the specified --output location. veracode package --source path/to/project/example-javascript --output verascan --trust Packager initiated... Verifying source project language ... Packaging Javascript artifacts for NPM project. Project example-javascript packaged to path/to/veracsan/veracode-auto-pack-example-javascript-js.zip. Successfully created 1 artifact(s). Created Javascript artifacts for NPM project. Total time taken to complete command: 3.296s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: The correct Kotlin version for your projects. The Maven or Gradle package manager installed. 
A Java version that your packager manager requires. Your projects must: Have the correct pom.xml, build.gradle, or build.gradle.kts file. Compile successfully without errors. The auto-packager completes the steps shown in the following example command output. Verifies that your project language is supported. Uses Gradle to builds and packages the project. Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/kotlin-server-side-sample/gradle --output verascan --trust Packager initiated... Verifying source project language ... Copying Java artifacts for GradlePackager project. Copied artifact: path/to/verascan/demo-0.0.1-SNAPSHOT-plain.jar. Copied artifact: path/to/verascan/demo-0.0.1-SNAPSHOT.jar. Successfully created 2 artifact(s). Created Java artifacts for GradlePackager project. Total time taken to complete command: 8.632s Before you can run the auto-packager, you must meet the following requirements: Your Perl project must be a version 5.x Your project must contain at least one file with the following extensions: of .pl, .pm, .plx, .pl5, or .cgi The auto-packager completes the following steps, as shown in the example command output. Finds all the files matching the required extensions and packages them in a ZIP archive (artifact). Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/bugzilla --output verascan --trust Packaging code for project bugzilla. Please wait; this may take a while... Verifying source project language ... Packaging code for project bugzilla. Please wait; this may take a while... Verifying source project language ... [GenericPackagerPerl] Packaging succeeded for the path path/to/project/bugzilla. Successfully created 1 artifact(s). Created Perl artifacts for GenericPackagerPerl project. Total time taken to complete command: 9.965s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: Correct PHP version for your projects. Composer dependency manager installed. Your projects must: Have the correct PHP composer.json file. Compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. To build and package the project source code and lock file with Composer, runs the command composer install. Saves the artifacts of your packaged project in the specified --output location. veracode package --source path/to/project/example-php --output verascan --trust Packager initiated... Validating output path ... Packaging PHP artifacts for Composer project. Project captainhook zipped and saved to path/to/verascan/veracode-auto-pack-captainhook-php.zip. Packaging PHP artifacts for Composer project. Project template-integration zipped and saved to path/to/verascan/veracode-auto-pack-template-integration-php.zip. Successfully created 2 artifact(s). Created PHP artifacts for Composer project. Total time taken to complete command: 3.62s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: The correct pip and Python or pyenv version for packaging your project are installed. A package manager configuration file with the required settings to resolve all dependencies. Your projects must compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. 
To resolve all third party dependencies and generate the lock file, PIP install, runs the command pip install -r requirements.txt. Packages the project source code, lock file, and vendor folder. Saves the artifact of your packaged project to the specified --output location. veracode package --source path/to/project/example-python --output verascan --trust Packager initiated... Verifying source project language ... Packaging Python artifacts for PIP project. Project example-python zipped and saved to path/to/verascan/veracode-auto-pack-example-python-python.zip. Successfully created 1 artifact(s). Created Python artifacts for PIP project. Total time taken to complete command: 14.359s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: Correct version of Node, NPM, or Yarn for your projects. NPM or Yarn installation resolves all dependencies. Have the correct package.json file. Package.json file has the React Native version as a dependency. The auto-packager completes the following steps, as shown in the example command output. For NPM applications, runs the npm install command. For Yarn applications, runs the yarn install command. For Expo build, runs the expo start command. veracode package --source path/to/project/example-javascript-yarn --output verascan --trust Packaging code for project example-javascript-yarn. Please wait; this may take a while... Verifying source project language ... Packaging Javascript artifacts for Yarn project. JavaScript project example-javascript-yarn packaged to: path/to/verascan/veracode-auto-pack-example-javascript-yarn-js.zip Successfully created 1 artifact(s). Created Javascript artifacts for Yarn project. Total time taken to complete command: 1m9.13s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: The Bundler package manager installed with the correct Ruby version. The Veracode packager gemfile installed. This gemfile handles pre-processing of Rails projects for Static Analysis. The ability to run the command bundle install Your projects must compile successfully without errors. Optionally, to test your configured environment, run the command rails server. The auto-packager completes the following steps, as shown in the example command output. To configure the vendor path, runs the command bundle config --local path vendor. Runs the command bundle install without development and test: bundle install --without development test. To check for the Rails installation, runs the command bundle info rails. If Rails is not installed, the auto-packager assumes it is not a Rails project and exits. To install the Veracode packager gem, runs the command bundle add veracode. To package your project using the Veracode packager gem, runs the command bundle exec veracode. Saves the artifact of your packaged project to the specified --output location. veracode package --source path/to/project/rails --output verascan --trust Packager initialized... Verifying source project language ... Packaging Ruby artifacts for RubyPackager project 'veracode-rails-20240321225855.zip'. ArtifactPath: /rails/tmp/veracode-rails-20240321225855.zip ValidatedSource: /rails ValidatedOutput: /rails/verascan Project name: rails 44824469 bytes written to destination file. Path: /rails/verascan/rails.zip temporary zip file deleted. Path: /rails/tmp/veracode-rails-20240321225855.zip Successfully created 1 artifact(s). Created Ruby artifacts for RubyPackager project. 
Total time taken to complete command: 1m27.428s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: A JDK version that you have tested to successfully package your application. The Maven, Gradle, or sbt package manager installed with the correct Java version. Your projects must: Have the correct pom.xml, build.gradle, or build.sbt file. Compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. Runs the sbt assembly command sbt clean assembly. This command assists in creating a JAR file with dependencies in non-Spring projects, which improves SCA scanning. If sbt assembly fails, runs the sbt package command sbt clean package. Copies the artifacts of your packaged application to the specified --output location. veracode package --source path/to/project/packSample/zio-quill --output verascan --trust Packager initiated... Verifying source project language ... Copying Java artifacts for SbtPackager project. Copied artifact: path/to/verascan/quill-cassandra_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-cassandra-monix_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-cassandra-pekko_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-cassandra-zio_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-codegen_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-codegen-jdbc_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-codegen-tests_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-core_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-doobie_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-engine_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-jdbc_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-jdbc-monix_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-jdbc-test-h2_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-jdbc-test-mysql_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-jdbc-test-oracle_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-jdbc-test-postgres_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-jdbc-test-sqlite_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-jdbc-test-sqlserver_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-jdbc-zio_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-monix_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-orientdb_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-spark_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-sql_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-sql-test_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-util_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill-zio_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/quill_2.13-4.8.2+3-d2965801-SNAPSHOT.jar. Copied artifact: path/to/verascan/zio-quill-docs_2.12-4.8.2+3-d2965801-SNAPSHOT.jar. Successfully created 28 artifact(s). Created Java artifacts for SbtPackager project. 
Total time taken to complete command: 45.428s
8585
dbpedia
2
47
https://nix.dev/tutorials/packaging-existing-software.html
en
Packaging existing software with Nix — nix.dev documentation
[ "https://nix.dev/_static/img/nix.svg" ]
[]
[]
[ "Nix", "packaging" ]
null
[]
null
Packaging Existing Software With Nix
en
../_static/favicon.png
https://nix.dev/tutorials/packaging-existing-software.html
Your first package# Note A package is a loosely defined concept that refers to either a collection of files and other data, or a Nix expression representing such a collection before it comes into being. Packages in Nixpkgs have a conventional structure, allowing them to be discovered in searches and composed in environments alongside other packages. For the purposes of this tutorial, a “package” is a Nix language function that will evaluate to a derivation. It will enable you or others to produce an artifact for practical use, as a consequence of having “packaged existing software with Nix”. To start, consider this skeleton derivation: 1{ stdenv }: 2 3stdenv.mkDerivation { } This is a function which takes an attribute set containing stdenv, and produces a derivation (which currently does nothing). A package function# GNU Hello is an implementation of the “hello world” program, with source code accessible from the GNU Project’s FTP server. To begin, add a pname attribute to the set passed to mkDerivation. Every package needs a name and a version, and Nix will throw error: derivation name missing without. stdenv.mkDerivation { + pname = "hello"; + version = "2.12.1"; Next, you will declare a dependency on the latest version of hello, and instruct Nix to use fetchzip to download the source code archive. Note fetchzip can fetch more archives than just zip files! The hash cannot be known until after the archive has been downloaded and unpacked. Nix will complain if the hash supplied to fetchzip is incorrect. Set the hash attribute to an empty string and then use the resulting error message to determine the correct hash: 1# hello.nix 2{ 3 stdenv, 4 fetchzip, 5}: 6 7stdenv.mkDerivation { 8 pname = "hello"; 9 version = "2.12.1"; 10 11 src = fetchzip { 12 url = "https://ftp.gnu.org/gnu/hello/hello-2.12.1.tar.gz"; 13 sha256 = ""; 14 }; 15} Save this file to hello.nix and run nix-build to observe your first build failure: $ nix-build hello.nix error: cannot evaluate a function that has an argument without a value ('stdenv') Nix attempted to evaluate a function as a top level expression; in this case it must have its arguments supplied either by default values, or passed explicitly with '--arg' or '--argstr'. See https://nix.dev/manual/nix/stable/language/constructs.html#functions. at /home/nix-user/hello.nix:3:3: 2| { 3| stdenv, | ^ 4| fetchzip, Problem: the expression in hello.nix is a function, which only produces its intended output if it is passed the correct arguments. Building with nix-build# stdenv is available from nixpkgs, which must be imported with another Nix expression in order to pass it as an argument to this derivation. The recommended way to do this is to create a default.nix file in the same directory as hello.nix, with the following contents: 1# default.nix 2let 3 nixpkgs = fetchTarball "https://github.com/NixOS/nixpkgs/tarball/nixos-24.05"; 4 pkgs = import nixpkgs { config = {}; overlays = []; }; 5in 6{ 7 hello = pkgs.callPackage ./hello.nix { }; 8} This allows you to run nix-build -A hello to realize the derivation in hello.nix, similar to the current convention used in Nixpkgs. Note callPackage automatically passes attributes from pkgs to the given function, if they match attributes required by that function’s argument attribute set. In this case, callPackage will supply stdenv, and fetchzip to the function defined in hello.nix. The tutorial Package parameters and overrides with callPackage goes into detail on how this works. 
Now run the nix-build command with the new argument: $ nix-build -A hello error: ... … while evaluating attribute 'src' of derivation 'hello' at /home/nix-user/hello.nix:9:3: 8| 9| src = fetchzip { | ^ 10| url = "https://ftp.gnu.org/gnu/hello/hello-2.12.1.tar.gz"; error: hash mismatch in file downloaded from 'https://ftp.gnu.org/gnu/hello/hello-2.12.1.tar.gz': specified: sha256:0000000000000000000000000000000000000000000000000000 got: sha256:0xw6cr5jgi1ir13q6apvrivwmmpr5j8vbymp0x6ll0kcv6366hnn error: 1 dependencies of derivation '/nix/store/8l961ay0q0ydfsgby0ngz6nmkchjqd50-hello-2.12.1.drv' failed to build Finding the file hash# As expected, the incorrect file hash caused an error, and Nix helpfully provided the correct one. In hello.nix, replace the empty string with the correct hash: 1# hello.nix 2{ 3 stdenv, 4 fetchzip, 5}: 6 7stdenv.mkDerivation { 8 pname = "hello"; 9 version = "2.12.1"; 10 11 src = fetchzip { 12 url = "https://ftp.gnu.org/gnu/hello/hello-2.12.1.tar.gz"; 13 sha256 = "0xw6cr5jgi1ir13q6apvrivwmmpr5j8vbymp0x6ll0kcv6366hnn"; 14 }; 15} Now run the previous command again: $ nix-build -A hello this derivation will be built: /nix/store/rbq37s3r76rr77c7d8x8px7z04kw2mk7-hello.drv building '/nix/store/rbq37s3r76rr77c7d8x8px7z04kw2mk7-hello.drv'... ... configuring ... configure: creating ./config.status config.status: creating Makefile ... building ... <many more lines omitted> Great news: the derivation built successfully! The console output shows that configure was called, which produced a Makefile that was then used to build the project. It wasn’t necessary to write any build instructions in this case because the stdenv build system is based on GNU Autoconf, which automatically detected the structure of the project directory. Build result# Check your working directory for the result: $ ls default.nix hello.nix result This result is a symbolic link to a Nix store location containing the built binary; you can call ./result/bin/hello to execute this program: $ ./result/bin/hello Hello, world! Congratulations, you have successfully packaged your first program with Nix! Next, you’ll package another piece of software with external-to-stdenv dependencies that present new challenges, requiring you to make use of more mkDerivation features. A package with dependencies# Now you will package a somewhat more complicated program, icat, which allows you to render images in your terminal. Change the default.nix from the previous section by adding a new attribute for icat: 1# default.nix 2let 3 nixpkgs = fetchTarball "https://github.com/NixOS/nixpkgs/tarball/nixos-24.05"; 4 pkgs = import nixpkgs { config = {}; overlays = []; }; 5in 6{ 7 hello = pkgs.callPackage ./hello.nix { }; 8 icat = pkgs.callPackage ./icat.nix { }; 9} Copy hello.nix to a new file icat.nix, and update the pname and version attributes in that file: 1# icat.nix 2{ 3 stdenv, 4 fetchzip, 5}: 6 7stdenv.mkDerivation { 8 pname = "icat"; 9 version = "v0.5"; 10 11 src = fetchzip { 12 # ... 13 }; 14} Now to download the source code. icat’s upstream repository is hosted on GitHub, so you should replace the previous source fetcher. This time you will use fetchFromGitHub instead of fetchzip, by updating the argument attribute set to the function accordingly: 1# icat.nix 2{ 3 stdenv, 4 fetchFromGitHub, 5}: 6 7stdenv.mkDerivation { 8 pname = "icat"; 9 version = "v0.5"; 10 11 src = fetchFromGitHub { 12 # ... 13 }; 14} Fetching source from GitHub# While fetchzip required url and sha256 arguments, more are needed for fetchFromGitHub. 
The source URL is https://github.com/atextor/icat, which already gives the first two arguments: owner: the name of the account controlling the repository owner = "atextor"; repo: the name of the repository to fetch repo = "icat"; Navigate to the project’s Tags page to find a suitable Git revision (rev), such as the Git commit hash or tag (e.g. v1.0) corresponding to the release you want to fetch. In this case, the latest release tag is v0.5. As in the hello example, a hash must also be supplied. This time, instead of using the empty string and letting nix-build report the correct one in an error, you can fetch the correct hash in the first place with the nix-prefetch-url command. You need the SHA256 hash of the contents of the tarball (as opposed to the hash of the tarball file itself). Therefore pass the --unpack and --type sha256 arguments: $ nix-prefetch-url --unpack https://github.com/atextor/icat/archive/refs/tags/v0.5.tar.gz --type sha256 path is '/nix/store/p8jl1jlqxcsc7ryiazbpm7c1mqb6848b-v0.5.tar.gz' 0wyy2ksxp95vnh71ybj1bbmqd5ggp13x3mk37pzr99ljs9awy8ka Set the correct hash for fetchFromGitHub: 1# icat.nix 2{ 3 stdenv, 4 fetchFromGitHub, 5}: 6 7stdenv.mkDerivation { 8 pname = "icat"; 9 version = "v0.5"; 10 11 src = fetchFromGitHub { 12 owner = "atextor"; 13 repo = "icat"; 14 rev = "v0.5"; 15 sha256 = "0wyy2ksxp95vnh71ybj1bbmqd5ggp13x3mk37pzr99ljs9awy8ka"; 16 }; 17} Missing dependencies# Running nix-build with the new icat attribute, an entirely new issue is reported: $ nix-build -A icat these 2 derivations will be built: /nix/store/86q9x927hsyyzfr4lcqirmsbimysi6mb-source.drv /nix/store/l5wz9inkvkf0qhl8kpl39vpg2xfm2qpy-icat.drv ... error: builder for '/nix/store/l5wz9inkvkf0qhl8kpl39vpg2xfm2qpy-icat.drv' failed with exit code 2; last 10 log lines: > from /nix/store/hkj250rjsvxcbr31fr1v81cv88cdfp4l-glibc-2.37-8-dev/include/stdio.h:27, > from icat.c:31: > /nix/store/hkj250rjsvxcbr31fr1v81cv88cdfp4l-glibc-2.37-8-dev/include/features.h:195:3: warning: #warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE" [8;;https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wcpp-Wcpp8;;] > 195 | # warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE" > | ^~~~~~~ > icat.c:39:10: fatal error: Imlib2.h: No such file or directory > 39 | #include <Imlib2.h> > | ^~~~~~~~~~ > compilation terminated. > make: *** [Makefile:16: icat.o] Error 1 For full logs, run 'nix log /nix/store/l5wz9inkvkf0qhl8kpl39vpg2xfm2qpy-icat.drv'. A compiler error! The icat source was pulled from GitHub, and Nix tried to build what it found, but compilation failed due to a missing dependency: the imlib2 header. If you search for imlib2 on search.nixos.org, you’ll find that imlib2 is already in Nixpkgs. Add this package to your build environment by adding imlib2 to the arguments of the function in icat.nix. Then add the argument’s value imlib2 to the list of buildInputs in stdenv.mkDerivation: 1# icat.nix 2{ 3 stdenv, 4 fetchFromGitHub, 5 imlib2, 6}: 7 8stdenv.mkDerivation { 9 pname = "icat"; 10 version = "v0.5"; 11 12 src = fetchFromGitHub { 13 owner = "atextor"; 14 repo = "icat"; 15 rev = "v0.5"; 16 sha256 = "0wyy2ksxp95vnh71ybj1bbmqd5ggp13x3mk37pzr99ljs9awy8ka"; 17 }; 18 19 buildInputs = [ imlib2 ]; 20} Run nix-build -A icat again and you’ll encounter another error, but compilation proceeds further this time: $ nix-build -A icat this derivation will be built: /nix/store/bw2d4rp2k1l5rg49hds199ma2mz36x47-icat.drv ... 
error: builder for '/nix/store/bw2d4rp2k1l5rg49hds199ma2mz36x47-icat.drv' failed with exit code 2; last 10 log lines: > from icat.c:31: > /nix/store/hkj250rjsvxcbr31fr1v81cv88cdfp4l-glibc-2.37-8-dev/include/features.h:195:3: warning: #warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE" [8;;https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wcpp-Wcpp8;;] > 195 | # warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE" > | ^~~~~~~ > In file included from icat.c:39: > /nix/store/4fvrh0sjc8sbkbqda7dfsh7q0gxmnh9p-imlib2-1.11.1-dev/include/Imlib2.h:45:10: fatal error: X11/Xlib.h: No such file or directory > 45 | #include <X11/Xlib.h> > | ^~~~~~~~~~~~ > compilation terminated. > make: *** [Makefile:16: icat.o] Error 1 For full logs, run 'nix log /nix/store/bw2d4rp2k1l5rg49hds199ma2mz36x47-icat.drv'. You can see a few warnings which should be corrected in the upstream code. But the important bit for this tutorial is fatal error: X11/Xlib.h: No such file or directory: another dependency is missing. Finding packages# Determining from where to source a dependency is currently somewhat involved, because package names don’t always correspond to library or program names. You will need the Xlib.h headers from the X11 C package, the Nixpkgs derivation for which is libX11, available in the xorg package set. There are multiple ways to figure this out: search.nixos.org# Tip The easiest way to find what you need is on search.nixos.org/packages. Unfortunately in this case, searching for x11 produces too many irrelevant results because X11 is ubiquitous. On the left side bar there is a list package sets, and selecting xorg shows something promising. In case all else fails, it helps to become familiar with searching the Nixpkgs source code for keywords. Local code search# To find name assignments in the source, search for "<keyword> =". For example, these are the search results for "x11 = " or "libx11 =" on Github. Or fetch a clone of the repository and search the code locally. Start a shell that makes the required tools available – git for version control, and rg for code search (provided by the ripgrep package): $ nix-shell -p git ripgrep [nix-shell:~]$ The Nixpkgs repository is huge. Only clone the latest revision to avoid waiting a long time for a full clone: [nix-shell:~]$ git clone https://github.com/NixOS/nixpkgs --depth1 ... [nix-shell:~]$ cd nixpkgs/ To narrow down results, only search the pkgs subdirectory, which holds all the package recipes: [nix-shell:~]$ rg"x11 =" pkgs pkgs/tools/X11/primus/default.nix 21: primus = if useNvidia then primusLib_ else primusLib_.override { nvidia_x11 = null; }; 22: primus_i686 = if useNvidia then primusLib_i686_ else primusLib_i686_.override { nvidia_x11 = null; }; pkgs/applications/graphics/imv/default.nix 38: x11 = [ libGLU xorg.libxcb xorg.libX11 ]; pkgs/tools/X11/primus/lib.nix 14: if nvidia_x11 == null then libGL pkgs/top-level/linux-kernels.nix 573: ati_drivers_x11 = throw "ati drivers are no longer supported by any kernel >=4.1"; # added 2021-05-18; ... 
<a lot more results> Since rg is case sensitive by default, Add -i to make sure you don’t miss anything: [nix-shell:~]$ rg -i "libx11 =" pkgs pkgs/applications/version-management/monotone-viz/graphviz-2.0.nix 55: ++ lib.optional (libX11 == null) "--without-x"; pkgs/top-level/all-packages.nix 14191: libX11 = xorg.libX11; pkgs/servers/x11/xorg/default.nix 1119: libX11 = callPackage ({ stdenv, pkg-config, fetchurl, xorgproto, libpthreadstubs, libxcb, xtrans, testers }: stdenv.mkDerivation (finalAttrs: { pkgs/servers/x11/xorg/overrides.nix 147: libX11 = super.libX11.overrideAttrs (attrs: { Local derivation search# To search derivations on the command line, use nix-locate from the nix-index. Adding package sets as dependencies# Add this to your derivation’s input attribute set and to buildInputs: 1# icat.nix 2{ 3 stdenv, 4 fetchFromGitHub, 5 imlib2, 6 xorg, 7}: 8 9stdenv.mkDerivation { 10 pname = "icat"; 11 version = "v0.5"; 12 13 src = fetchFromGitHub { 14 owner = "atextor"; 15 repo = "icat"; 16 rev = "v0.5"; 17 sha256 = "0wyy2ksxp95vnh71ybj1bbmqd5ggp13x3mk37pzr99ljs9awy8ka"; 18 }; 19 20 buildInputs = [ imlib2 xorg.libX11 ]; 21} Note Only add the top-level xorg derivation to the input attrset, rather than the full xorg.libX11, as the latter would cause a syntax error. Because Nix is lazily-evaluated, using xorg.libX11 means that we only include the libX11 attribute and the derivation doesn’t actually include all of xorg into the build context. Fixing build failures# Run the last command again: $ nix-build -A icat this derivation will be built: /nix/store/x1d79ld8jxqdla5zw2b47d2sl87mf56k-icat.drv ... error: builder for '/nix/store/x1d79ld8jxqdla5zw2b47d2sl87mf56k-icat.drv' failed with exit code 2; last 10 log lines: > 195 | # warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE" > | ^~~~~~~ > icat.c: In function 'main': > icat.c:319:33: warning: ignoring return value of 'write' declared with attribute 'warn_unused_result' [8;;https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wunused-result-Wunused-result8;;] > 319 | write(tempfile, &buf, 1); > | ^~~~~~~~~~~~~~~~~~~~~~~~ > gcc -o icat icat.o -lImlib2 > installing > install flags: SHELL=/nix/store/8fv91097mbh5049i9rglc73dx6kjg3qk-bash-5.2-p15/bin/bash install > make: *** No rule to make target 'install'. Stop. For full logs, run 'nix log /nix/store/x1d79ld8jxqdla5zw2b47d2sl87mf56k-icat.drv'. The missing dependency error is solved, but there is now another problem: make: *** No rule to make target 'install'. Stop. installPhase# stdenv is automatically working with the Makefile that comes with icat. The console output shows that configure and make are executed without issue, so the icat binary is compiling successfully. The failure occurs when the stdenv attempts to run make install. The Makefile included in the project happens to lack an install target. The README in the icat repository only mentions using make to build the tool, leaving the installation step up to users. To add this step to your derivation, use the installPhase attribute. It contains a list of command strings that are executed to perform the installation. Because make finishes successfully, the icat executable is available in the build directory. You only need to copy it from there to the output directory. In Nix, the output directory is stored in the $out variable. That variable is accessible in the derivation’s builder execution environment. 
Create a bin directory within the $out directory and copy the icat binary there: 1# icat.nix 2{ 3 stdenv, 4 fetchFromGitHub, 5 imlib2, 6 xorg, 7}: 8 9stdenv.mkDerivation { 10 pname = "icat"; 11 version = "v0.5"; 12 13 src = fetchFromGitHub { 14 owner = "atextor"; 15 repo = "icat"; 16 rev = "v0.5"; 17 sha256 = "0wyy2ksxp95vnh71ybj1bbmqd5ggp13x3mk37pzr99ljs9awy8ka"; 18 }; 19 20 buildInputs = [ imlib2 xorg.libX11 ]; 21 22 installPhase = '' 23 mkdir -p $out/bin 24 cp icat $out/bin 25 ''; 26} Phases and hooks# Nixpkgs stdenv.mkDerivation derivations are separated into phases. Each is intended to control some aspect of the build process. Earlier you observed how stdenv.mkDerivation expected the project’s Makefile to have an install target, and failed when it didn’t. To fix this, you defined a custom installPhase containing instructions for copying the icat binary to the correct output location, in effect installing it. Up to that point, the stdenv.mkDerivation automatically determined the buildPhase information for the icat package. During derivation realisation, there are a number of shell functions (“hooks”, in Nixpkgs) which may execute in each derivation phase. Hooks do things like set variables, source files, create directories, and so on. These are specific to each phase, and run both before and after that phase’s execution. They modify the build environment for common operations during the build. It’s good practice when packaging software with Nix to include calls to these hooks in the derivation phases you define, even when you don’t make direct use of them. This facilitates easy overriding of specific parts of the derivation later. And it keeps the code tidy and makes it easier to read. Adjust your installPhase to call the appropriate hooks: 1# icat.nix 2 3# ... 4 5 installPhase = '' 6 runHook preInstall 7 mkdir -p $out/bin 8 cp icat $out/bin 9 runHook postInstall 10 ''; 11 12# ...
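With the hooks in place the recipe is complete. As a quick end-to-end check, here is a minimal sketch assuming the same default.nix attribute as before; nix-build links its output into a result symlink in the current directory, which should now contain the installed binary:

$ nix-build -A icat
$ ls ./result/bin
icat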
8585
dbpedia
1
7
https://ask.replit.com/t/python-problem-with-new-template/61456
en
Python: Problem with new template
https://global.discourse…2_2_1024x497.png
https://global.discourse…2_2_1024x497.png
[ "https://global.discourse-cdn.com/business7/uploads/replitteams/optimized/3X/2/7/27a4dcb5bc739ac03552834983bfeb0ab9c9c472_2_517x251.png", "https://emoji.discourse-cdn.com/twitter/frowning_face.png?v=12", "https://sea2.discourse-cdn.com/business7/user_avatar/ask.replit.com/firepup650/48/4680_2.png", "https://sea2.discourse-cdn.com/business7/user_avatar/ask.replit.com/firepup650/48/4680_2.png", "https://avatars.discourse-cdn.com/v4/letter/n/e9c0ed/48.png", "https://avatars.discourse-cdn.com/v4/letter/t/e68b1a/48.png" ]
[]
[]
[ "" ]
null
[ "system Closed" ]
2023-09-07T16:05:56+00:00
Problem description: After transferring an old Python repl to the new template (cloning from Github repository into a new repl) I’m running into a somewhat old problem again: When I hit the Run button poetry starts to &hellip;
en
https://global.discourse…7693_2_32x32.png
Replit Ask
https://ask.replit.com/t/python-problem-with-new-template/61456
Problem description: After transferring an old Python repl to the new template (cloning from Github repository into a new repl) I’m running into a somewhat old problem again: When I hit the Run button poetry starts to install a package gardenlinux: After the successful installation nothing happens: The program doesn’t run and the the Run button stays activated, ie. it’s actually a Stop button. There’s also a huge CPU utilisation. Running the file from the shell works fine. As with the old problem, the strange thing is that the repl is a pure Python repl, so no installation of packages is needed. And I don’t even know what gardenlinux does. Unfortunately, the old solution doesn’t work. Expected behavior: Hitting Run just runs the selected file (current.py). No installation of the gardenlinux package. Actual behavior: See above. Steps to reproduce: I don’t know how to replicate except trying to run the repl in question. Bug appears at this link: https://replit.com/@TimsbimPython/Morsels Browser: Safari OS: MacOS Device (Android, iOS, n/a leave blank): MacBook Pro Plan (Free, Hacker, Pro Plan): Teams Pro Hey all! I was able to reproduce the issue and let the team know. I believe this is to do with our new Modules system that encapsulates configuration for languages such as Python onto our own infrastructure. This seems to prevent users from configuring the packager in the .replit file. I will follow up once I have an update.
8585
dbpedia
3
32
https://www.jetbrains.com/help/go/creating-and-optimizing-imports.html
en
Auto import | GoLand
https://resources.jetbra…meta/preview.png
https://resources.jetbra…meta/preview.png
[ "https://resources.jetbrains.com/help/img/idea/2024.2/go_auto_import.png", "https://resources.jetbrains.com/help/img/idea/2024.2/go_add_imports.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.actions.quickfixBulb.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.actions.more.svg", "https://resources.jetbrains.com/help/img/idea/2024.2/go_auto_import_popups_disabled.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.expui.general.add.svg", "https://resources.jetbrains.com/help/img/idea/2024.2/go_exclude_from_auto_import.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.expui.codeInsight.intentionBulb.png", "https://resources.jetbrains.com/help/img/idea/2024.2/go_optimize_imports.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.expui.general.settings.svg", "https://resources.jetbrains.com/help/img/idea/2024.2/go_optimize_imports_before_commit.png", "https://resources.jetbrains.com/help/img/idea/2024.2/go_reformat_file_dialog.png", "https://resources.jetbrains.com/help/img/idea/2024.2/go_goimports_local_grouping.png" ]
[]
[]
[ "" ]
null
[]
null
Basic procedures to create and optimize imports in GoLand. Learn more how to import the missing import or XML namespace.
en
https://jetbrains.com/ap…e-touch-icon.png
GoLand Help
https://www.jetbrains.com/help/go/creating-and-optimizing-imports.html
Hover over the inspection widget in the top-right corner of the editor, click , and disable the Show Auto-Import Tooltip option. The Show Auto-Import Tooltip option toggles the auto-import feature. The setting does not influence the auto import functionality when you select constraints from the code completion list. In the Exclude from auto-import and completion section, click Alt+Insert, and specify a class or a package that you want to exclude. You can also select whether you want to exclude items from the current project or from all projects (globally). (If you've selected a directory) Choose whether you want to optimize imports in all files in the directory, or only in locally modified files (if your project is under version control), and click Run.
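Outside the IDE, a comparable import cleanup can be scripted with the standalone goimports tool. A rough sketch, where the -local prefix example.com/myproject is a placeholder for your own module path and the trailing dot makes goimports recurse through the current directory:

$ go install golang.org/x/tools/cmd/goimports@latest
$ goimports -l -w -local example.com/myproject .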
8585
dbpedia
2
91
https://docs.snowflake.com/en/user-guide/snowsql-install-config
en
Snowflake Documentation
[ "https://docs.snowflake.com/en/_images/ui-navigation-help-icon.svg" ]
[]
[]
[ "" ]
null
[]
null
en
https://docs.snowflake.com/en/user-guide/snowsql-install-config
Installing SnowSQL on Linux using the installer¶ This section describes how to download, verify, and run the installer package to install SnowSQL on Linux. To upgrade SnowSQL manually (such as if your software installation policy prohibits upgrading automatically), use the RPM package to install SnowSQL. The RPM package does not set up SnowSQL to upgrade automatically. For instructions, see Installing SnowSQL on Linux using the RPM package (in this topic). Setting the download directory and configuration file location¶ By default, the SnowSQL installer downloads the binaries to the following directory: ~/.snowsql Consequently, the configuration file is located under the download directory: ~/.snowsql/config To change both the download directory and location of the configuration file, set the WORKSPACE environment variable to any user-writable directory. This approach is particularly useful if you have an isolated SnowSQL environment for each process. In addition, you can separate the download directory from the configuration file by setting the SNOWSQL_DOWNLOAD_DIR environment variable so that multiple SnowSQL processes can share the binaries. For example: $ SNOWSQL_DOWNLOAD_DIR=/var/shared snowsql -h Copy Note that SNOWSQL_DOWNLOAD_DIR is supported starting with the SnowSQL 1.1.70 bootstrap version. To check the version you are using, execute the following command from the terminal window prompt: $ snowsql --bootstrap-version Copy Downloading the SnowSQL installer¶ Go to the SnowSQL Download page, find the version of SnowSQL that you want to install, and download the files with the following filename extensions: .bash (the installer script) .bash.sig (the signature that you can use to verify the downloaded package) Using curl to download the SnowSQL installer¶ If you want to download the installer from a script or a terminal window (such as using curl, rather than your web browser), you can download the installers directly from the Snowflake Client Repository. For increased flexibility, Snowflake provides both Amazon Web Services (AWS) and Azure endpoints for the repository. Accounts hosted on any supported cloud platform can download the installer from either endpoint. Run curl (or an equivalent command line tool) to download the installer. The curl syntax is as follows: AWS endpoint: $ curl -O https://sfc-repo.snowflakecomputing.com/snowsql/bootstrap/<bootstrap_version>/linux_x86_64/snowsql-<version>-linux_x86_64.bash Copy Microsoft Azure endpoint: $ curl -O https://sfc-repo.azure.snowflakecomputing.com/snowsql/bootstrap/<bootstrap_version>/linux_x86_64/snowsql-<version>-linux_x86_64.bash Copy Where: <version> is the combined SnowSQL major, minor, and patch versions. For example, for version 1.3.1, the major version is 1, the minor version is 3, and the patch version is 1. So, the version is 1.3.1. <bootstrap_version> is the combined SnowSQL major and minor versions. For example, for version 1.3.1, the major version is 1 and the minor version is 3, so the bootstrap version is 1.3. For example, to download the SnowSQL installer where <bootstrap_version> is 1.3 and <version> is 1.3.2: AWS endpoint: $ curl -O https://sfc-repo.snowflakecomputing.com/snowsql/bootstrap/1.3/linux_x86_64/snowsql-1.3.2-linux_x86_64.bash Microsoft Azure endpoint: $ curl -O https://sfc-repo.azure.snowflakecomputing.com/snowsql/bootstrap/1.3/linux_x86_64/snowsql-1.3.2-linux_x86_64.bash For more information about SnowSQL versions, see Understanding SnowSQL Versioning (in this topic).
Verifying the package signature¶ To verify the signature for the downloaded package: Download and import the latest Snowflake GPG public key from the Classic Console or the public keyserver. Download from the web interface: In the Classic Console, select Help » Download… to display the Downloads dialog. Select CLI Client (snowsql) on the left, then select the Snowflake GPG Public Key icon on the right. Download from the keyserver: Enter the following command, using the GPG key associated with the SnowSQL version: For SnowSQL 1.2.24 and higher: $ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 630D9F3CAB551AF3 For SnowSQL version 1.2.11 through 1.2.23: $ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 37C7086698CB005C For SnowSQL version 1.1.75 through 1.2.10: $ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys EC218558EABB25A1 For SnowSQL version 1.1.74 and lower: $ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 93DB296A69BE019A Note If this command fails with the following error: gpg: keyserver receive failed: Server indicated a failure Copy then specify that you want to use port 80 for the keyserver: gpg --keyserver hkp://keyserver.ubuntu.com:80 ... Copy Download the GPG signature and verify the signature: # If you prefer to use curl to download the signature file, run this command: curl -O https://sfc-repo.snowflakecomputing.com/snowsql/bootstrap/1.3/linux_x86_64/snowsql-<version>-linux_x86_64.bash.sig # Verify the package signature. gpg --verify snowsql-<version>-linux_x86_64.bash.sig snowsql-<version>-linux_x86_64.bash Copy or, if you are downloading the signature file from the Azure endpoint: # If you prefer to use curl to download the signature file, run this command: curl -O https://sfc-repo.azure.snowflakecomputing.com/snowsql/bootstrap/1.3/linux_x86_64/snowsql-<version>-linux_x86_64.bash.sig # Verify the package signature. gpg --verify snowsql-<version>-linux_x86_64.bash.sig snowsql-<version>-linux_x86_64.bash Copy Note Verifying the signature produces a warning similar to the following: gpg: Signature made Mon 24 Sep 2018 03:03:45 AM UTC using RSA key ID <gpg_key_id> gpg: Good signature from "Snowflake Computing <snowflake_gpg@snowflake.net>" unknown gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Copy To avoid the warning, you can grant the Snowflake GPG public key implicit trust. Your local environment can contain multiple GPG keys; however, for security reasons, Snowflake periodically rotates the public GPG key. As a best practice, we recommend deleting the existing public key after confirming that the latest key works with the latest signed package. For example: gpg --delete-key "Snowflake Computing" Copy
Rather, you must log into your Snowflake account using SnowSQL and remain connected for a sufficient period of time for the auto-upgrade feature to upgrade the client to the latest release. To verify the SnowSQL version that currently starts when you run the client, use the -v option without a value: snowsql -v Copy Version: 1.3.1 To force SnowSQL to install and use a specific version, use the -v option and specify the version you want to install. For example, execute the following command for version 1.3.0: snowsql -v1.3.0 Copy Installing SnowSQL on macOS using the installer¶ This section describes how to download and run the installer package to install SnowSQL on macOS. Setting the download directory and configuration file location¶ By default, the SnowSQL installer downloads the binaries to the following directory: ~/.snowsql Consequently, the configuration file is located under the download directory: ~/.snowsql/config You can change both the download directory and location of the configuration file by setting the WORKSPACE environment variable to any user-writable directory. This is particularly useful if you have an isolated SnowSQL environment for each process. In addition, you can separate the download directory from the configuration file by setting the SNOWSQL_DOWNLOAD_DIR environment variable so that multiple SnowSQL processes can share the binaries. For example: SNOWSQL_DOWNLOAD_DIR=/var/shared snowsql -h Copy Note that SNOWSQL_DOWNLOAD_DIR is supported starting with the SnowSQL 1.1.70 bootstrap version. To check the version you are using, execute the following command from the terminal window prompt: snowsql --bootstrap-version Copy Downloading the SnowSQL installer¶ To download the SnowSQL installer, go to the SnowSQL Download page. This version of the SnowSQL installer enables auto-upgrade for patches. Using curl to download the SnowSQL installer¶ If you want to download the installer from a script or a terminal window (such as using curl, rather than your web browser), you can download the installers directly from the Snowflake Client Repository. For increased flexibility, Snowflake provides both Amazon Web Services (AWS) and Azure endpoints for the repository. Accounts hosted on any supported cloud platform can download the installer from either endpoint. Run curl (or an equivalent command line tool) to download the installer. The curl syntax is as follows: AWS endpoint: curl -O https://sfc-repo.snowflakecomputing.com/snowsql/bootstrap/<bootstrap_version>/darwin_x86_64/snowsql-<version>-darwin_x86_64.pkg Copy Microsoft Azure endpoint: curl -O https://sfc-repo.azure.snowflakecomputing.com/snowsql/bootstrap/<bootstrap_version>/darwin_x86_64/snowsql-<version>-darwin_x86_64.pkg Copy where: <version> is the combined SnowSQL major, minor, and patch versions. For example, for version 1.3.1, the major version is 1, the minor version is 3, and the patch version is 1. So, the version is 1.3.1. <bootstrap_version> is the combined SnowSQL major and minor versions. For example, for version 1.3.1, the major version is 1 and the minor version is 3, so the bootstrap version is 1.3. 
For example, to download the SnowSQL installer where <bootstrap_version> is 1.3 and <version> is 1.3.2: AWS endpoint: curl -O\https://sfc-repo.snowflakecomputing.com/snowsql/bootstrap/1.3/darwin_x86_64/snowsql-\ |snowsql-version|\ -darwin_x86_64.pkg Copy Microsoft Azure endpoint: curl -O\https://sfc-repo.azure.snowflakecomputing.com/snowsql/bootstrap/1.3/darwin_x86_64/snowsql-\ |snowsql-version|\ -darwin_x86_64.pkg Copy For more information about SnowSQL versions, see Understanding SnowSQL Versioning (in this topic). The macOS operating system can verify the installer signature automatically, so GPG signature verification is not needed. Installing SnowSQL using the installer¶ Open snowsql-darwin_x86_64.pkg in the download location to run the installer PKG file. Follow the instructions provided by the installer. Note The installation can be automated by running the installer from the command line. The target directory can be set to either CurrentUserHomeDirectory (~/Applications directory) or LocalSystem (/Applications directory): installer -pkg snowsql-darwin_x86_64.pkg -target CurrentUserHomeDirectory Copy When you install a new major or minor version, SnowSQL does not upgrade itself immediately. Rather, you must log into your Snowflake account using SnowSQL and remain connected for a sufficient period of time for the auto-upgrade feature to upgrade the client to the latest release. To verify the SnowSQL version that currently starts when you run the client, use the -v option without a value: snowsql -v Copy Version: 1.3.0 To force SnowSQL to install and use a specific version, use the -v option and specify the version you want to install. For example, execute the following command for version 1.3.1: snowsql -v1.3.1 Copy Configuring the Z shell alias (macOS only)¶ If Z shell (also known as zsh) is your default terminal shell, set an alias to the SnowSQL executable so that you can run SnowSQL on the command line in Terminal. The SnowSQL installer installs the executable in /Applications/SnowSQL.app/Contents/MacOS/snowsql and appends this path to the PATH or alias entry in ~/.profile. Because zsh does not normally read this file, add an alias to this path in ~/.zshrc, which zsh does read. To add an alias to the SnowSQL executable: Open (or create, if missing) the ~/.zshrc file. Add the following line: aliassnowsql=/Applications/SnowSQL.app/Contents/MacOS/snowsql Copy Save the file. Installing SnowSQL on macOS using homebrew cask¶ Homebrew Cask is a popular extension of Homebrew used for package distribution, installation, and maintenance. There is no separate SnowSQL installer to download. If Homebrew Cask is installed on your macOS platform, you can install Snowflake directly. Run the brew install command, specifying snowflake-snowsql as the cask to install: $ brew install --cask snowflake-snowsql Copy Configuring the Z shell alias (macOS only)¶ If Z shell (also known as zsh) is your default terminal shell, set an alias to the SnowSQL executable so that you can run SnowSQL on the command line in Terminal. The SnowSQL installer installs the executable in /Applications/SnowSQL.app/Contents/MacOS/snowsql and appends this path to the PATH or alias entry in ~/.profile. Because zsh does not normally read this file, add an alias to this path in ~/.zshrc, which zsh does read. To add an alias to the SnowSQL executable: Open (or create, if missing) the ~/.zshrc file. Add the following line: aliassnowsql=/Applications/SnowSQL.app/Contents/MacOS/snowsql Copy Save the file. 
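For example, a minimal sketch of adding the alias from a terminal and picking it up in the current session (assuming the default install location shown above):

$ echo 'alias snowsql=/Applications/SnowSQL.app/Contents/MacOS/snowsql' >> ~/.zshrc
$ source ~/.zshrc
$ snowsql -v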
Installing SnowSQL on Microsoft Windows using the installer¶ This section describes how to download and run the installer package to install SnowSQL on Microsoft Windows. Setting the download directory and configuration file location¶ By default, the SnowSQL installer downloads the binaries to the following directory: %USERPROFILE%\.snowsql Consequently, the configuration file is located under the download directory: %USERPROFILE%\.snowsql\config You can change both the download directory and location of the configuration file by setting the WORKSPACE environment variable to any user-writable directory. This is particularly useful if you have an isolated SnowSQL environment for each process. In addition, you can separate the download directory from the configuration file by setting the SNOWSQL_DOWNLOAD_DIR environment variable so that multiple SnowSQL processes can share the binaries. For example: SNOWSQL_DOWNLOAD_DIR=/var/shared snowsql -h Copy Note that SNOWSQL_DOWNLOAD_DIR is supported starting with the SnowSQL 1.1.70 bootstrap version. To check the version you are using, execute the following command from the terminal window prompt: snowsql --bootstrap-version Copy Downloading the SnowSQL installer¶ To download the SnowSQL installer, go to the SnowSQL Download page. This version of the SnowSQL installer enables auto-upgrade for patches. Using curl to download the SnowSQL installer¶ If you want to download the installer from a script or a terminal window (such as using curl, rather than your web browser), you can download the installers directly from the Snowflake Client Repository. For increased flexibility, Snowflake provides both Amazon Web Services (AWS) and Azure endpoints for the repository. Accounts hosted on any supported cloud platform can download the installer from either endpoint. Run curl (or an equivalent command line tool) to download the installer. The curl syntax is as follows: AWS endpoint: curl -O https://sfc-repo.snowflakecomputing.com/snowsql/bootstrap/<bootstrap_version>/windows_x86_64/snowsql-<version>-windows_x86_64.msi Copy Microsoft Azure endpoint: curl -O https://sfc-repo.azure.snowflakecomputing.com/snowsql/bootstrap/<bootstrap_version>/windows_x86_64/snowsql-<version>-windows_x86_64.msi Copy Where: <version> is the combined SnowSQL major, minor, and patch versions. For example, for version 1.3.1, the major version is 1, the minor version is 3, and the patch version is 1. So, the version is 1.3.1. <bootstrap_version> is the combined SnowSQL major and minor versions. For example, for version 1.3.1, the major version is 1 and the minor version is 3, so the bootstrap version is 1.3. For example, to download the SnowSQL installer where <bootstrap_version> is 1.3 and <version> is 1.3.2: AWS endpoint: curl -O\https://sfc-repo.snowflakecomputing.com/snowsql/bootstrap/1.3/windows_x86_64/snowsql-\ |snowsql-version|\ -windows_x86_64.msi Copy Microsoft Azure endpoint: curl -O\https://sfc-repo.azure.snowflakecomputing.com/snowsql/bootstrap/1.3/windows_x86_64/snowsql-\ |snowsql-version|\ -windows_x86_64.msi Copy For more information about SnowSQL versions, see Understanding SnowSQL Versioning (in this topic). The Windows operating system can verify the installer signature automatically, so GPG signature verification is not needed. Installing SnowSQL using the installer¶ Open snowsql-windows_x86_64.msi in the download location to run the installer MSI file. Follow the instructions provided by the installer. 
Note The installation can be automated by running the MSI installer msiexec from the command line. The target directory cannot be changed from %ProgramFiles%Snowflake SnowSQL. For example: C:\Users\<username> msiexec /i snowsql-windows_x86_64.msi /q Copy When you install a new major or minor version, SnowSQL does not upgrade itself immediately. Rather, you must log into your Snowflake account using SnowSQL and remain connected for a sufficient period of time for the auto-upgrade feature to upgrade the client to the latest release. To verify the SnowSQL version that currently starts when you run the client, use the -v option without a value: snowsql -v Copy Version: 1.3.1 To force SnowSQL to install and use a specific version, use the -v option and specify the version you want to install. For example, execute the following command for version 1.3.0: snowsql -v1.3.0 Copy Understanding SnowSQL versioning¶ SnowSQL version numbers consist of three digits: <major version>.<minor version>.<patch version>. For example, version 1.3.1 indicates the major version is 1, the minor version is 3, the patch version is 1. To determine the SnowSQL version that currently starts when you run the client, use the -v option without a value: snowsql -v Copy Version: 1.3.1 In general, the following guidelines apply to the different version types: Major version: A change in the major version indicates dramatic improvements in the underlying Snowflake service. A new major version breaks backward compatibility. You will need to download and install the latest SnowSQL version from the web interface. Minor version: A change in the minor version indicates improvements to support forward compatibility in either SnowSQL or the underlying Snowflake service. A new minor version does not break backward compatibility, but Snowflake strongly recommends that you download and install the latest SnowSQL version from the web interface. Patch version: A change in the patch version indicates small enhancements or bug fixes were applied. The auto-upgrade feature automatically installs all patch versions. For more information about the auto-upgrade feature, see What is Auto-upgrade? (in this topic). Note If Snowflake releases a new minor or patch version, the functionality in your current version should continue to work, but any newly-released bug fixes and features will not be available via the auto-upgrade feature. Therefore, we strongly recommended that you download and install the latest SnowSQL version when a new version is available. What is auto-upgrade?¶ Important Starting with version 1.3.0, SnowSQL disables automatic upgrades by default to avoid potential issues that can affect production environments when an automatic upgrade occurs. To upgrade, you should download and install new versions manually, preferably in a non-production environment. Snowflake strongly recommends you leave this setting disabled, but if want to install new versions automatically when they are released, you can disable the SnowSQL --noup option. If you choose to enable automatic upgrades for SnowSQL, SnowSQL automatically downloads the new binary in a background process and executes the current version. The next time you run SnowSQL, the new version starts. To illustrate the process: For a fresh installation, you download the SnowSQL installer (such as version 1.3.0) using the Snowflake web interface and install the client. Each time you run SnowSQL, the client checks whether a newer version is available in the SnowSQL upgrade repository. 
If a newer version (such as version 1.3.1) is available, SnowSQL downloads it as a background process while running the currently installed version. The next time you run SnowSQL, the client executes version 1.3.1 while checking if a newer version is available. Enabling auto-upgrade¶ The -o noup=<value> option lets you override the SnowSQL default behavior of requiring manual installations for new versions, where: True enables the no-upgrade behavior (Default value for version 1.3.0 and higher). SnowSQL does not automatically check for upgrades and does not upgrade itself. False disables the no-upgrade behavior (Default value for version 1.2.32 and lower). SnowSQL automatically checks for upgrades and automatically upgrades itself if any new upgrade is available within the same major.minor version. You can specify this option while logging into Snowflake to enable auto-upgrade during that specific session. For example: snowsql -o noup=False Copy Alternatively, add the noup = False option to the configuration file to enable automatic upgrades for SnowSQL. Running a previous SnowSQL version¶ Note If you are running SnowSQL version 1.3.0 or newer, you cannot use this process to run a 1.2.x version. If you want to run a 1.2.x version, you must download and install the earlier version manually. If you encounter an issue with the latest SnowSQL version, such as version 1.3.1, you can temporarily run another 1.3.x version. To determine the SnowSQL version that currently starts when you run the client, use the -v option without a value: $ snowsql -v Version: 1.3.1 Copy To display a list of available SnowSQL versions, use the --versions option: $ snowsql --versions 1.3.1 1.3.0 Copy To install an earlier SnowSQL version from the list, use the -v option and specify the version you want to install. For example, to install version 1.3.0 if you are running a newer version, such as 1.3.1: $ snowsql -v 1.3.0 Installing version: 1.3.0 [####################################] 100% Copy Use the same option to specify the version you want to run when you start SnowSQL: $ snowsql -v 1.3.0 Copy
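As a sketch of the configuration-file approach, the relevant lines might look like the following, assuming the default config location and that the setting lives in the [options] section of ~/.snowsql/config (adjust to match your own configuration):

$ cat ~/.snowsql/config
[options]
noup = False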
8585
dbpedia
0
88
https://spcldace.readthedocs.io/en/latest/source/dace.html
en
dace package — DaCe 0.16.1 documentation
[]
[]
[]
[ "" ]
null
[]
null
en
null
Returns a per-dimension upper bound on the maximum number of elements in each dimension. This bound will be tight in the case of Range. Adds zeroes to the subset, in the indices contained in axes. The method is mostly used to restore subsets that had their zero-indices removed (i.e., squeezed subsets). Hence, the method is called ‘unsqueeze’. Examples (initial subset, axes -> result subset, output): - [i], [0] -> [0, i], [0] - [i], [0, 1] -> [0, 0, i], [0, 1] - [i], [0, 2] -> [0, i, 0], [0, 2] - [i], [0, 1, 2, 3] -> [0, 0, 0, 0, i], [0, 1, 2, 3] - [i], [0, 2, 3, 4] -> [0, i, 0, 0, 0], [0, 2, 3, 4] - [i], [0, 1, 1] -> [0, 0, 0, i], [0, 1, 2] Parameters: axes (Sequence[int]) – The axes where the zero-indices should be added. Return type: List[int] Returns: A list of the actual axes where the zero-indices were added. Adds 0:1 ranges to the subset, in the indices contained in axes. The method is mostly used to restore subsets that had their length-1 ranges removed (i.e., squeezed subsets). Hence, the method is called ‘unsqueeze’. Examples (initial subset, axes -> result subset, output): - [i:i+10], [0] -> [0:1, i], [0] - [i:i+10], [0, 1] -> [0:1, 0:1, i:i+10], [0, 1] - [i:i+10], [0, 2] -> [0:1, i:i+10, 0:1], [0, 2] - [i:i+10], [0, 1, 2, 3] -> [0:1, 0:1, 0:1, 0:1, i:i+10], [0, 1, 2, 3] - [i:i+10], [0, 2, 3, 4] -> [0:1, i:i+10, 0:1, 0:1, 0:1], [0, 2, 3, 4] - [i:i+10], [0, 1, 1] -> [0:1, 0:1, 0:1, i:i+10], [0:1, 1, 2] Parameters: axes (Sequence[int]) – The axes where the 0:1 ranges should be added. Return type: List[int] Returns: A list of the actual axes where the 0:1 ranges were added.
8585
dbpedia
0
30
https://www.kali.org/docs/development/advanced-packaging-example/
en
Advanced Packaging Step-By-Step Example (FinalRecon & Python-icmplib)
https://www.kali.org/ima…es/kali-logo.svg
https://www.kali.org/ima…es/kali-logo.svg
[]
[]
[]
[ "kali", "linux", "kalilinux", "Penetration", "Testing", "Penetration Testing", "Distribution", "Advanced" ]
null
[]
2023-06-06T00:00:00+00:00
This guide is accurate at the time of writing. As it references a lot of external resources out of our control, items may be different over time (as software gets updated). FinalRecon is a Python 3 application with multiple Python dependencies. At the time of writing, one of the dependencies (python3-icmplib) is not in the Kali Linux repository.
en
https://www.kali.org/images/favicon.png
Kali Linux
https://www.kali.org/docs/development/advanced-packaging-example/
This guide is accurate at the time of writing. As it references a lot of external resources out of our control, items may be different over time (as software gets updated). FinalRecon is a Python 3 application with multiple Python dependencies. At the time of writing, one of the dependencies (python3-icmplib) is not in the Kali Linux repository. In this guide we will have to learn how to follow dependency chains, and fix anything required to ensure that the end package can be included. We will also create a patch, helper-script, as well as a runtime test for the package. We will assume we have already followed our documentation on setting up a packaging environment as well as our previous other packaging guides #1 (Instaloader) & #2 (Photon) as this will explain their contents. FinalRecon Code Overview The first action we will take, will be to look at FinalRecon’s source code to see what information we can acquire. Using this, we notice the following: It has no tag release The MIT license file There is no setup.py file (which is used for setuptools) There is a requirements.txt file (which is used for pip) Various descriptions about the tool & usage guides Various external links (if any additional research is required) Missing Tag Releases As FinalRecon does not have a tag release we will have to create our own upstream tar file. Looking to see what branches there are, we discover there is just one (there isn’t a stable/production one, or is there a beta/deployment/staging one). As a result, we will use whatever is the latest commit on the main branch until the author does a tag release. We can auto open up an issue request and/or email them seeing if they will response to such an act. Note that having a “tagged” release is preferred when doing Debian packaging. End users often want something that is “stable”, and which has been fully tested. It’s also easier for the distribution to know when to update the package: we just wait for upstream to release something, which is a clear signal that the code is ready to be used. So when it’s possible, we favor packaging a tagged release over the latest Git commit. License This package has been detected as having a MIT license by GitHub. If we look at the specific license file we can see that there is not a lot to copy, so we will be copying this exactly as-is. Unfortunately, though we have found a maintainer we have not found any contact information yet. We will have to continue searching for contact information. Dependencies As there is a requirements.txt file (which is used for Python’s pip to install any Python dependencies that are required for this tool to work), we will need check to see what’s needed. Description(s) We will once again pull our description from FinalRecon’s GitHub. For the short description we will use a modified version of the first line in the README, “A fast and simple python script for web reconnaissance.” For the long description we will also use a modified version of the first line in the README, however we will expand this time, “A fast and simple python script for web reconnaissance that follows a modular structure and provides detailed information on various areas.” Maintainer(s) If we were to look all over the GitHub we would not find an email address. We could look in git log and view the email addresses associated, however these do not seem to be solid as there are multiple for “thewhiteh4t” (at least 3). Instead, we do more digging. 
We notice that there is a YouTube video demo linked in the README.md, if we go to the YouTube channel’s about page we can view an email address for business inquiries (which does not match to any in the git log). This will be a good choice to use as the contact information. With that said all said, not having a contact information is not an essential part , so if we were unable to find one we could still continue to package. Setting Up The Environment We will assume that we have already followed our documentation on setting up a packing environment. Let’s set up our directories now for this package: kali@kali:~$ mkdir -p ~/kali/packages/finalrecon/ ~/kali/upstream/ kali@kali:~$ Downloading Git Snapshot We’re going to download an archive of the upstream source code. Since upstream didn’t tag any release yet, we’ll package the latest Git commit on the main branch. There are different (many?) ways to do that, and in this example we will use uscan for the task. uscan is able to download a Git repository, pack it into a .tar.gz archive, and come up with a meaningful (and somewhat standard) version string. This last point is important: a Debian package must have a version, however a Git commit doesn’t have a version per se. So we need to associate a version with a Git commit, and there are many ways to get that wrong. So rather than deciding by ourselves what the package version should be, we’ll let the tooling (uscan in this case) do that for us. In order to use uscan, we need a watch file. This file is usually part of the packaging files, and located in debian/watch. Let’s start by entering the working directory, and then create the debian dir: kali@kali:~$ cd ~/kali/packages/finalrecon/ kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ mkdir debian kali@kali:~/kali/packages/finalrecon$ And now let’s create the watch file. The purpose of the watch file is to provide instructions to find the latest upstream release online. In this particular case though, upstream didn’t provide any tagged release yet, so we’ll configure the watch file to track the latest Git commit on the main branch: kali@kali:~/kali/packages/finalrecon$ vim debian/watch kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/watch version=4 opts="mode=git, pgpmode=none" \ https://github.com/thewhiteh4t/FinalRecon HEAD kali@kali:~/kali/packages/finalrecon$ At this point, we have enough to run uscan to download and pack the latest Git commit from upstream: kali@kali:~/kali/packages/finalrecon$ uscan --destdir ~/kali/upstream/ --force-download \ --package finalrecon --upstream-version 0~0 --watchfile debian/watch uscan: Newest version of finalrecon on remote site is 0.0~git20201107.0d41eb6, local version is 0~0 uscan: => Newer package available from: => https://github.com/thewhiteh4t/FinalRecon HEAD uscan warn: Missing debian/source/format, switch compression to gzip Successfully repacked ~/kali/upstream/finalrecon-0.0~git20201107.0d41eb6.tar.xz as ~/kali/upstream/finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz. This command warrants some explanations. Since at this point we run uscan from an almost empty directory, we need to be explicit about what we want to do. In particular: --watchfile tells uscan where is the watch file that we want it to use. --package is used to give the package name. --upstream-version is actually the “current upstream version”. 
In general, uscan works by comparing the latest version found online with the version that is currently packaged, and it downloads the latest upstream version only if it’s newer than the current version. However here there’s no “current version” since we’re creating a new package, so we tell uscan that the current version is 0~0, ie. the lowest version possible, so that whatever version found online is deemed higher than that. --destdir tells uscan where to save the download files. --force-download overrides uscan’s guess of what it should do: we want it to download the latest upstream version. To be sure, we can have a look in the ~/kali/upstream directory to check what files landed there: kali@kali:~/kali/packages/finalrecon$ ls ~/kali/upstream finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz finalrecon-0.0~git20201107.0d41eb6.tar.xz uscan packed the code from Git in a .tar.xz file, and for some reason (see the line starting with uscan warn: above), it repacked in as a .tar.gz. We don’t really care about the compression, we’re fine with both .gz and .xz. What matters is that we’ll use the file which name ends with .orig.tar.*, so we’re going to use the .tar.gz. uscan came up with a funny-looking (and rather complicated) version string: 0.0~git20201107.0d41eb6. Why is that? 0.0~ is the lowest starting point for a version string. It’s handy to start from there, so that whenever upstream does a “tagged release”, whatever they choose, it will be greater than our version. So we’ll be able to use it for the package version “as is”. git is informative, and it obviously refers to the VCS used by upstream (examples of other VCS: svn or bzr). 20201107 is the date (YYYYMMDD aka. ISO-8601 format) of the upstream commit that we package. Having the date part of the version string is needed so that whenever we’ll want to import a new Git snapshot, the date will be newer, and the new version string will be sorted above by the package manager (version strings must ALWAYS go ascending). 0d41eb6 is the Git commit. It’s informative, and it’s a non-ambiguous way to know exactly what upstream code is included in the package. Without it, a developer who wants to know what Git commit was packaged would rely on the date, and if there’s more than one commit on this date, it wouldn’t be clear what commit exactly was packaged. Additionally, this is an UTC date, while usual tools or web browser usuall show dates in local time: another source of error for those who rely on the date only. So having the Git commit part of the version string is really useful for developers (maybe not so much for users). Alright, we hope that you appreciated this overwhelming amount of information. Let’s move on and keep working on the package. Creating Package Source Code We are now going to create a new empty Git repository: kali@kali:~/kali/packages/finalrecon$ git init Initialized empty Git repository in /home/kali/kali/packages/finalrecon/.git/ kali@kali:~/kali/packages/finalrecon$ We can now import the .tar.gz we previously downloaded into the empty Git repository we just created. When prompted, we remember to accept the default values (or use the flag --no-interactive): kali@kali:~/kali/packages/finalrecon$ gbp import-orig ~/kali/upstream/finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz What will be the source package name? [finalrecon] What is the upstream version? [0.0~git20201107.0d41eb6] gbp:info: Importing '../upstream/finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz' to branch 'upstream'... 
gbp:info: Source package is finalrecon gbp:info: Upstream version is 0.0~git20201107.0d41eb6 gbp:info: Successfully imported version 0.0~git20201107.0d41eb6 of /home/kali/kali/upstream/finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz kali@kali:~/kali/packages/finalrecon$ We remember to change the default branch, from master to kali/master (as master is for upstream development), then delete the old branch. We also run a quick git branch -v to visually see the change: kali@kali:~/kali/packages/finalrecon$ git checkout -b kali/master Switched to a new branch 'kali/master' kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git branch -D master Deleted branch master (was 95b196b). kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git branch -v * kali/master bd003d7 New upstream version 0.0~git20201107.0d41eb6 pristine-tar 2413cfe pristine-tar data for finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz upstream bd003d7 New upstream version 0.0~git20201107.0d41eb6 kali@kali:~/kali/packages/finalrecon$ We can now populate the debian/ folder with its related files. We will manually specify the upstream .tar.gz file (as it is not located in ../, but instead ~/kali/upstream/). We will also set the package name to use in the same naming convention as before (<packagename>_<version> as is Debian standards). Note that we need to use the option --addmissing as there’s already a debian/ directory (we created it above for the only purpose of having a watch file). Afterwards we will remove any example files that get automatically generated, as they are not used: kali@kali:~/kali/packages/finalrecon$ dh_make --file ~/kali/upstream/finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz -p finalrecon_0.0~git20201107.0d41eb6 --addmissing --single -y Maintainer Name : Joseph O'Gorman Email-Address : [email protected] Date : Fri, 22 Apr 2022 11:33:33 +0000 Package Name : finalrecon Version : 0.0~git20201107.0d41eb6 License : blank Package Type : single Currently there is not top level Makefile. This may require additional tuning File watch.ex exists, skipping Done. Please edit the files in the debian/ subdirectory now. kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ rm -f debian/*.docs debian/README* debian/*.ex debian/*.EX kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git status On branch kali/master Untracked files: (use "git add <file>..." to include in what will be committed) debian/ nothing added to commit but untracked files present (use "git add" to track) kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ ls debian/ changelog control copyright rules source watch kali@kali:~/kali/packages/finalrecon$ At this point, we have the base packaging files in place, and it feels like a good idea to commit before starting some real work: kali@kali:~/kali/packages/finalrecon$ git add debian/ kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git commit -m "Initial packaging files" [kali/master 52042da] Initial packaging files 6 files changed, 93 insertions(+) create mode 100644 debian/changelog create mode 100644 debian/control create mode 100644 debian/copyright create mode 100755 debian/rules create mode 100644 debian/source/format create mode 100644 debian/watch kali@kali:~/kali/packages/finalrecon$ We can now start to edit the files in the debian/ folder to make sure the information is accurate. 
We can use what we found from before on FinalRecon’s GitHub to supply the correct information. To recap, we need to make sure we got the following bits of information to locate: Dependencies Description License Maintainers FinalRecon (Pip) Dependencies As there is a requirements.txt file (which is used for Python’s pip to install any Python dependencies that are required for this tool to work), we will need check to see what’s needed. For this tool to work, it requires additional software to be installed, aka dependencies. Depending on how the tool is coded, will depend on what is required (or only recommended) to be installed. FinalRecon is using various Python libraries and does not call any system commands. In Python’s eco-system, there is pip. This is Python’s package manager, which can be used to download and install any Python libraries. However, we are trying to build a package for Debian package management instead. As a result, any Python libraries need to be ported over to Debian format, in order for our package to use them (so the OS can track any files, allowing for cleaner upgrades and un-installs of packages). Lets start out by looking to see what is needed outside of the standard values, for this tool to work: kali@kali:~/kali/packages/finalrecon$ cat requirements.txt requests ipwhois bs4 lxml dnslib aiohttp aiodns psycopg2 tldextract icmplib kali@kali:~/kali/packages/finalrecon$ We then try to search for each dependency from requirements.txt in apt-cache, to make sure that we have everything in Kali Linux’ repository: kali@kali:~/kali/packages/finalrecon$ sudo apt update kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ apt-cache search ipwhois | grep -i python3 python3-ipwhois - Retrieve and parse whois data for IP addresses (Python 3) kali@kali:~/kali/packages/finalrecon$ We could search each one manually by repeating the above process for all items in requirements.txt, or we can make a quick loop to automate it. During this process, we will notice one dependency which does not have an entry (icmplib): kali@kali:~/kali/packages/finalrecon$ cat requirements.txt | while read x; do apt-cache search $x | grep -i "python3-$x -" \ || echo --MISSING $x--; done python3-requests - elegant and simple HTTP library for Python3, built for human beings python3-ipwhois - Retrieve and parse whois data for IP addresses (Python 3) python3-bs4 - error-tolerant HTML parser for Python 3 python3-lxml - pythonic binding for the libxml2 and libxslt libraries python3-dnslib - Module to encode/decode DNS wire-format packets (Python 3) python3-aiohttp - http client/server for asyncio python3-aiodns - Asynchronous DNS resolver library for Python 3 python3-psycopg2 - Python 3 module for PostgreSQL python3-tldextract - Python library for separating TLDs --MISSING icmplib-- kali@kali:~/kali/packages/finalrecon$ We can try and broaden our search for icmplib, as we were limiting output last time (by using grep): kali@kali:~/kali/packages/finalrecon$ apt-cache search icmplib | grep -i python3 kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ apt-cache search icmplib kali@kali:~/kali/packages/finalrecon$ Unfortunately it appears that Kali Linux does not have this dependency (Python’s icmplib) in the repository at this point in time. This means we will need to extend our packaging process to accommodate for packaging up icmplib as well, to allow us to completely package up FinalRecon. We will first look for icmplib in the pypi.org repository. 
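If you want to double-check the gap from the command line first, a quick sketch (apt shows that no python3-icmplib binary package exists yet, and pip can pull the upstream source from PyPI for a closer look; the /tmp path is only an example):

kali@kali:~$ apt-cache policy python3-icmplib
kali@kali:~$ pip download icmplib --no-deps -d /tmp/icmplib-src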
We can easily find icmplib on PyPI along with the link to its GitHub page. If we do the same process with icmplib looking over the GitHub page as we did for FinalRecon, we can see that icmplib will not need additional dependencies (no requirements.txt file and setup.py does not list anything for install_requires) and therefore will be a relatively straightforward Python package. We can now either: Continue to package FinalRecon, before moving onto icmplib. We have to remember that we cannot successfully build a complete working package until we are done with icmplib. Pause FinalRecon packaging, and switch our focus to icmplib. We have to make sure we took detailed notes with the work we have done so far and information gathered. We will go with the former option, and continue as far as we can with FinalRecon. Editing FinalRecon Package Source Code We can now start to edit the files in the debian/ folder. Changelog We will now perform what are our standard changes (#1 (Instaloader) & #2 (Photon)) to the version, distribution and description. The resulting file should be similar to the following: kali@kali:~/kali/packages/finalrecon$ vim debian/changelog kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/changelog finalrecon (0.0~git20201107.0d41eb6-0kali1) kali-dev; urgency=medium * Initial release -- Joseph O'Gorman <[email protected]> Fri, 22 Apr 2022 11:33:33 +0700 kali@kali:~/kali/packages/finalrecon$ Control Using what we know from the information we have already gathered from GitHub and the source code, it is once again similar to our previous packaging guides (#1 (Instaloader) & #2 (Photon)). We should have a good understanding of what needs to be altered now. As there is no code that needs to be compiled, we can set Architecture: all. This is true for most Python scripts, as they are not providing Python “extensions”. If they are, they would generate a compiled .so files (e.g. psycopg2). We make sure to include the Python dependencies for building the package as well as the tool dependencies to run (the values from pip). There is one thing to note, and that is python3-icmplib. This package does not exist yet. We are adding this in for the time being as we will be creating it soon, to prevent going back and adding it we will add it now. 
This does mean that we will be unable to build our package until we finish with icmplib: kali@kali:~/kali/packages/finalrecon$ vim debian/control kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/control Source: finalrecon Section: misc Priority: optional Maintainer: Kali Developers <[email protected]> Uploaders: Joseph O'Gorman <[email protected]> Build-Depends: debhelper-compat (= 12), dh-python, python3-aiodns, python3-aiohttp, python3-all, python3-bs4, python3-dnslib, python3-icmplib, python3-ipwhois, python3-lxml, python3-psycopg2, python3-requests, python3-tldextract, Standards-Version: 4.5.0 Homepage: https://github.com/thewhiteh4t/FinalRecon Vcs-Browser: https://gitlab.com/kalilinux/packages/finalrecon Vcs-Git: https://gitlab.com/kalilinux/packages/finalrecon Package: finalrecon Architecture: all Depends: ${misc:Depends}, ${python3:Depends}, python3-aiodns, python3-aiohttp, python3-bs4, python3-dnslib, python3-icmplib, python3-ipwhois, python3-lxml, python3-psycopg2, python3-requests, python3-tldextract, Description: A fast and simple python script for web reconnaissance A fast and simple python script for web reconnaissance that follows a modular structure and provides detailed information on various areas. kali@kali:~/kali/packages/finalrecon$ Copyright As we have already finished getting the copyright information (license, name, contact, year and source), we now just need to add it: kali@kali:~/kali/packages/finalrecon$ vim debian/copyright kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/copyright Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: finalrecon Upstream-Contact: thewhiteh4t <[email protected]> Source: https://github.com/thewhiteh4t/FinalRecon Files: * Copyright: 2020 thewhiteh4t <[email protected]> License: MIT Files: debian/* Copyright: 2020 Joseph O'Gorman <[email protected]> License: MIT License: MIT Copyright (c) 2020 thewhiteh4t . Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: . The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. . THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. kali@kali:~/kali/packages/finalrecon$ Rules The start of the rules file will look very similar to #2 (Photon), however there is a new lower section. 
This part is to set the permissions on finalrecon.py, so when we call it using the symlinks (by debian/links), it will be executable: kali@kali:~/kali/packages/finalrecon$ vim debian/rules kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/rules #!/usr/bin/make -f #export DH_VERBOSE = 1 export PYBUILD_NAME=finalrecon %: dh $@ --with python3 override_dh_install: dh_install chmod 0755 debian/finalrecon/usr/share/finalrecon/finalrecon.py kali@kali:~/kali/packages/finalrecon$ Beware that the “dh” line needs to be indented by a single tabulation character, rather than spaces. Watch The watch file was already covered at the beginning of this example, and is configured to track the latest Git commit on the main branch. You can also add the common configuration for GitHub, but leave it commented out, so that whenever upstream issues a release, everything is ready in your watch file and you’ll just need to uncomment it: kali@kali:~/kali/packages/finalrecon$ vim debian/watch kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/watch version=4 opts=mode=git,pgpmode=none \ https://github.com/thewhiteh4t/FinalRecon HEAD # Use the following when upstream starts to tag releases: #opts=filenamemangle=s/.+\/v?(\d\S+)\.tar\.gz/finalrecon-$1\.tar\.gz/ \ # https://github.com/thewhiteh4t/FinalRecon/tags .*/v?(\d\S+)\.tar\.gz kali@kali:~/kali/packages/finalrecon$ Links Unlike last time (#1 (Instaloader) & #2 (Photon)), we are not going to use a “helper-script”, but will instead create a symlink pointing to the main Python file, which will still be in $PATH: kali@kali:~/kali/packages/finalrecon$ vim debian/links kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/links usr/share/finalrecon/finalrecon.py usr/bin/finalrecon kali@kali:~/kali/packages/finalrecon$ Install We can now create the install file, which is required to say what files go where on the system during the unpacking of the package. We need to make sure to include everything from the root of the package directory: kali@kali:~/kali/packages/finalrecon$ vim debian/finalrecon.install kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/finalrecon.install conf usr/share/finalrecon/ finalrecon.py usr/share/finalrecon/ modules usr/share/finalrecon/ wordlists usr/share/finalrecon/ kali@kali:~/kali/packages/finalrecon$ Note that there is no leading slash on the destination directories Patches For this tool we will need to also implement a patch to disable the update and dependency checker. If the program self updates, the system will not be aware of any additional files outside of the package, so things then start to get messy. Dependencies are also being handled by our package now instead. Knowing you need to do this comes with either knowing the tool or auditing the source code. The patch process looks like the following (for more information see our previous guide, #2 (Photon)): kali@kali:~/kali/packages/finalrecon$ gbp pq import gbp:info: Trying to apply patches at 'f1c4c9f8d25224186749ce69a9f403f207feda03' gbp:info: 0 patches listed in 'debian/patches/series' imported on 'patch-queue/kali/master' kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ vim finalrecon.py kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git add finalrecon.py kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git commit -m "disable requirements check" [...] 
kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ vim finalrecon.py kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git add finalrecon.py kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git commit -m "disable ver_check" [...] kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ gbp pq export gbp:info: On 'patch-queue/kali/master', switching to 'kali/master' gbp:info: Generating patches from git (kali/master..patch-queue/kali/master) kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git branch -v * kali/master bd003d7 New upstream version 0.0~git20201107.0d41eb6 patch-queue/kali/master 2935f22 disable ver_check pristine-tar 2413cfe pristine-tar data for finalrecon_0.0~git20201107.0d41eb6.orig.tar.gz upstream bd003d7 New upstream version 0.0~git20201107.0d41eb6 kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ ls debian/patches/ disable-requirements-check.patch disable-ver_check.patch series kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/patches/disable-requirements-check.patch From: Joseph O'Gorman <[email protected]> Subject: disable requirements check --- finalrecon.py | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/finalrecon.py b/finalrecon.py index 735f40b..95e99f1 100644 --- a/finalrecon.py +++ b/finalrecon.py @@ -26,22 +26,22 @@ else: path_to_script = os.path.dirname(os.path.realpath(__file__)) -with open(path_to_script + '/requirements.txt', 'r') as rqr: - pkg_list = rqr.read().strip().split('\n') - -print('\n' + G + '[+]' + C + ' Checking Dependencies...' + W + '\n') - -for pkg in pkg_list: - spec = importlib.util.find_spec(pkg) - if spec is None: - print(R + '[-]' + W + ' {}'.format(pkg) + C + ' is not Installed!' + W) - fail = True - else: - pass -if fail == True: - print('\n' + R + '[-]' + C + ' Please Execute ' + W + 'pip3 install -r requirements.txt' + C + ' to Install Missing Packages' + W + '\n') - os.remove(pid_path) - sys.exit() +#with open(path_to_script + '/requirements.txt', 'r') as rqr: +# pkg_list = rqr.read().strip().split('\n') + +#print('\n' + G + '[+]' + C + ' Checking Dependencies...' + W + '\n') + +#for pkg in pkg_list: +# spec = importlib.util.find_spec(pkg) +# if spec is None: +# print(R + '[-]' + W + ' {}'.format(pkg) + C + ' is not Installed!' + W) +# fail = True +# else: +# pass +#if fail == True: +# print('\n' + R + '[-]' + C + ' Please Execute ' + W + 'pip3 install -r requirements.txt' + C + ' to Install Missing Packages' + W + '\n') +# os.remove(pid_path) +# sys.exit() import argparse kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/patches/disable-ver_check.patch From: Joseph O'Gorman <[email protected]> Subject: disable ver_check --- finalrecon.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/finalrecon.py b/finalrecon.py index 95e99f1..d21877c 100644 --- a/finalrecon.py +++ b/finalrecon.py @@ -207,7 +207,7 @@ def full_recon(): try: fetch_meta() banner() - ver_check() + #ver_check() if target.startswith(('http', 'https')) == False: print(R + '[-]' + C + ' Protocol Missing, Include ' + W + 'http://' + C + ' or ' + W + 'https://' + '\n') kali@kali:~/kali/packages/finalrecon$ Runtime Test The runtime test process looks like the following (for more information see our previous guide, #2 (Photon)). 
Just like last time, we will just create a minimal test to look for the help screen: kali@kali:~/kali/packages/finalrecon$ mkdir -p debian/tests/ kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ vim debian/tests/control kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ cat debian/tests/control Test-Command: finalrecon -h Restrictions: superficial kali@kali:~/kali/packages/finalrecon$ Completing dependencies In theory, we should have a complete working package now with the exception of the missing icmplib dependency. So we now need to package up icmplib, before trying to finally build FinalRecon. icmplib Naming Packages Unlike our previous guides (#1 (Instaloader) & #2 (Photon)) where we use the same name for both source package and binary package, this time they will differ. The naming convention for a binary package is python3-<package>, which is important to follow as it has an impact at a technical level. However a source package can be python-<package> (or even just <package>). The source package name matters less, as nothing will break if that convention is not followed. However, from a Kali team point of view we prefer and will use python-<package>. See this Debian resource for more information. Cheat Sheet Packaging This package is straightforward using python3-setuptools (like in our first guide (Instaloader)), so to prevent this guide from getting too long, we will not be going step by step for icmplib. For more information on building Python libraries, see the Debian resource. Here is a quick overview of the commands needed to build the package: mkdir -p ~/kali/upstream/ ~/kali/build-area/ ~/kali/packages/python-icmplib/ wget https://github.com/ValentinBELYN/icmplib/archive/v1.2.2.tar.gz -O ~/kali/upstream/python-icmplib_1.2.2.orig.tar.gz cd ~/kali/packages/python-icmplib/ git init gbp import-orig ~/kali/upstream/python-icmplib_1.2.2.orig.tar.gz --no-interactive --debian-branch=kali/master dh_make --file ~/kali/upstream/python-icmplib_1.2.2.orig.tar.gz -p python-icmplib_1.2.2 --python -y rm -f debian/{*.docs,README*,*.ex,*.EX} vim debian/changelog vim debian/control vim debian/copyright vim debian/rules vim debian/watch gbp buildpackage --git-builder=sbuild --git-export=WC sudo dpkg -i ~/kali/build-area/python3-icmplib_1.2.2-0kali1_all.deb pip search icmplib git add debian/ git commit -m "Initial release" Previewing the contents of the key files in debian/: Changelog Straightforward, like all the other guides, #1 (Instaloader) & #2 (Photon): edit version, distribution and description. Note, python-icmplib needs to match the source name in debian/control: kali@kali:~/kali/packages/python-icmplib$ cat debian/changelog python-icmplib (1.2.2-0kali1) kali-dev; urgency=medium * Initial release -- Joseph O'Gorman <[email protected]> Mon, 12 Oct 2020 18:10:27 -0400 kali@kali:~/kali/packages/python-icmplib$ Control This is a bit different to what we have seen previously with Section: python. This is because it’s a Python library. For more information see Debian’s write up as well as the different options. We also need to name the package differently. The source package part of debian/control is the top part, which gets named with the Source: field, whereas the binary package part is the lower half and uses Package: for its name. Where possible, Kali Linux will always try to do both a source and binary package (see the Debian resource for more information). 
Note, the source name python-icmplib needs to match in debian/changelog: kali@kali:~/kali/packages/python-icmplib$ cat debian/control Source: python-icmplib Section: python Priority: optional Maintainer: Kali Developers <[email protected]> Uploaders: Joseph O'Gorman <[email protected]> Build-Depends: debhelper-compat (= 12), dh-python, python3-all, python3-setuptools Standards-Version: 4.5.0 Homepage: https://github.com/ValentinBELYN/icmplib Vcs-Browser: https://gitlab.com/kalilinux/packages/python-icmplib Vcs-Git: https://gitlab.com/kalilinux/packages/python-icmplib.git Package: python3-icmplib Architecture: all Depends: ${python3:Depends}, ${misc:Depends} Description: Python tool to forge ICMP packages icmplib is a brand new and modern implementation of the ICMP protocol in Python Able to forge ICMP packages to make your own ping, multiping, traceroute etc kali@kali:~/kali/packages/python-icmplib$ Copyright As we renamed the orig.tar.gz, the generated upstream name is incorrect, as it would not normally have a leading python-. We can get the correct name from the source URL: kali@kali:~/kali/packages/python-icmplib$ cat debian/copyright Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: icmplib Upstream-Contact: Valentin BELYN <[email protected]> Source: https://github.com/ValentinBELYN/icmplib Files: * Copyright: 2020 Valentin BELYN <[email protected]> License: LGPL-3+ Files: debian/* Copyright: 2020 Joseph O'Gorman <[email protected]> License: LGPL-3+ License: LGPL-3+ This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version. . This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. . You should have received a copy of the GNU Lesser General Public License along with this program; if not, see <https://www.gnu.org/licenses/>. . On Debian systems, the full text of the GNU Lesser General Public License version 3 can be found in the file `/usr/share/common-licenses/LGPL-3'. kali@kali:~/kali/packages/python-icmplib$ Rules We need to make sure to drop any leading python- when being defined in PYBUILD_NAME, even though the binary package which gets produced (as defined in debian/control) will be python3-icmplib. This is because pybuild only wants the Python module name: kali@kali:~/kali/packages/python-icmplib$ cat debian/rules #!/usr/bin/make -f #export DH_VERBOSE = 1 export PYBUILD_NAME=icmplib %: dh $@ --with python3 --buildsystem=pybuild kali@kali:~/kali/packages/python-icmplib$ Watch Straightforward, like all the other guides, #1 (Instaloader) & #2 (Photon), using the Debian standard watch file for GitHub: kali@kali:~/kali/packages/python-icmplib$ cat debian/watch version=4 opts=filenamemangle=s/.+\/v?(\d\S+)\.tar\.gz/icmplib-$1\.tar\.gz/ \ https://github.com/ValentinBELYN/icmplib/tags .*/v?(\d\S+)\.tar\.gz kali@kali:~/kali/packages/python-icmplib$ We have successfully managed to build a Python 3 library file, icmplib! Final FinalRecon Build As python3-icmplib may not have been pushed out or accepted into Kali Linux yet (or you may want to submit both packages at the same time), we can include the recently generated package in the chroot for sbuild to use, since it is a listed build dependency of FinalRecon. 
We are also unsure about the status of the package, so we may not want to commit the latest edits to Git. We will therefore add --git-export=WC when building the package: kali@kali:~/kali/packages/python-icmplib$ cd ~/kali/packages/finalrecon/ kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ gbp buildpackage \ --git-builder=sbuild --git-export=WC \ --extra-package=$HOME/kali/build-area/python3-icmplib_1.2.2-0kali1_all.deb [...] kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ ls -lah ~/kali/build-area/finalrecon_*.deb -rw-rw-r-- 1 kali kali 83K Nov 8 07:44 /home/kali/kali/build-area/finalrecon_0.0~git20201107.0d41eb6-0kali1_all.deb kali@kali:~/kali/packages/finalrecon$ Before we try to test our newly generated package, we remember that in debian/control we listed a few dependencies (not only to build the package but to run the package). dpkg will not satisfy these requirements for us, so we need to install them manually first. We can check what is missing from our operating system by doing: kali@kali:~/kali/packages/finalrecon$ dpkg-checkbuilddeps dpkg-checkbuilddeps: error: Unmet build dependencies: python3-ipwhois python3-dnslib python3-aiohttp python3-aiodns python3-psycopg2 python3-tldextract kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ sudo apt -y build-dep . [...] kali@kali:~/kali/packages/finalrecon$ Our package has been built and dependencies have been installed. It’s now time to finally install FinalRecon: kali@kali:~/kali/packages/finalrecon$ sudo dpkg -i ~/kali/build-area/finalrecon_*.deb [...] kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ dpkg -l | grep final ii finalrecon 0.0~git20201107.0d41eb6-0kali1 all A fast and simple python script for web reconnaissance kali@kali:~/kali/packages/finalrecon$ We have successfully managed to build FinalRecon as a package! Let’s test to make sure it works: kali@kali:~/kali/packages/finalrecon$ finalrecon usage: finalrecon [-h] [--headers] [--sslinfo] [--whois] [--crawl] [--dns] [--sub] [--trace] [--dir] [--ps] [--full] [-t T] [-T T] [-w W] [-r] [-s] [-sp SP] [-d D] [-e E] [-m M] [-p P] [-tt TT] [-o O] url finalrecon: error: the following arguments are required: url kali@kali:~/kali/packages/finalrecon$ Saving Our Work At this point, we can save the work we have put in: kali@kali:~/kali/packages/finalrecon$ git add debian/ kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git commit -m "Initial release" [kali/master d1c9f75] Initial release 12 files changed, 169 insertions(+) create mode 100644 debian/changelog create mode 100644 debian/control create mode 100644 debian/copyright create mode 100644 debian/finalrecon.install create mode 100644 debian/links create mode 100644 debian/patches/disable-requirements-check.patch create mode 100644 debian/patches/disable-ver_check.patch create mode 100644 debian/patches/series create mode 100755 debian/rules create mode 100644 debian/source/format create mode 100644 debian/tests/control create mode 100644 debian/watch kali@kali:~/kali/packages/finalrecon$ kali@kali:~/kali/packages/finalrecon$ git status On branch kali/master nothing to commit, working tree clean kali@kali:~/kali/packages/finalrecon$ We can now finish up the packaging by putting in a request on the Kali Linux bug tracker for these packages to be added!
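As one last optional step before filing the request (an addition to this walkthrough, assuming lintian is installed and the artifacts are still in ~/kali/build-area/), running lintian over both freshly built packages can catch common policy issues early:
kali@kali:~/kali/packages/finalrecon$ lintian ~/kali/build-area/finalrecon_0.0~git20201107.0d41eb6-0kali1_all.deb
kali@kali:~/kali/packages/finalrecon$ lintian ~/kali/build-area/python3-icmplib_1.2.2-0kali1_all.deb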
8585
dbpedia
2
29
https://maclabs.jazzace.ca/2018/10/29/putting-the-pkg-in-autopkg.html
en
Anthony’s Mac Labs Blog
[]
[]
[]
[ "" ]
null
[]
2018-10-29T00:00:00
Anthony’s Mac Labs Blog : Anthony Reimer’s blog for Mac Admins that shares what he’s learned recently
null
Posted 2018 October 29 For many people, AutoPkg could have easily been named AutoUpdate in recognition of its unparalleled ability to fetch updates for the apps you deploy. Yes, it can create packages as well, but perhaps the only packages you have seen being built by AutoPkg are those that are a simple wrapper around a downloaded app.[1] With the AppPkgCreator processor (introduced with AutoPkg 1.0.0), creating such a recipe is even simpler than before. But AutoPkg has always been about automating “the tasks one would normally perform manually to prepare third-party software for mass deployment.”[2] What if you need to modify something or add to it before deploying? If it is the same process every time, why not have AutoPkg do it for you? Here are three examples of recipes that do just that, explained in detail. FirefoxAutoconfig The main model I used when learning about AutoPkg’s ability to build custom packages was the FirefoxAutoconfig.pkg recipe written by Greg Neagle. Firefox allows administrators to configure their deployment of Firefox, usually to preset or lock certain preferences. That ability can be very useful in certain deployments (in my case, a Lab deployment where we want to control updates, block password saving, and the like). While we will soon be able to establish such settings via a configuration profile,[3] the only method supported until then requires making a configuration file and stuffing it in the application bundle. Historically, the most common tool used to create a supported configuration file has been the CCK (later, the CCK2) Firefox extension written by Mike Kaply. Deploying the configuration payload manually could be tedious; that’s where Greg’s FirefoxAutoconfig.pkg recipe comes in. It downloads a copy of Firefox, copies the app from the disk image to the cache, adds the autoconfig file (generated by CCK2) to the place within the application bundle that Firefox requires, and then creates an installer package of this modified app. All the recipe user needs to do is place the autoconfig.zip file they generated with CCK2 in the same directory as the recipe (usually, the RecipeOverrides directory within your AutoPkg setup); AutoPkg will do the rest. You can see how having a Firefox installer always available pre-configured with your settings could save a lot of work for an admin, especially considering Firefox’s regular release cycle. Let’s take a closer look at the recipe. The recipe runs the Firefox.download recipe from the main AutoPkg recipes repo and then adds the following processor steps: AppDmgVersioner [grabs the version number of the downloaded app for later use] PkgRootCreator [creates a place in the cache where the files being deployed will be added] Copier [copies the Firefox app from the disk image into the Applications folder inside the package root just created] Unarchiver [copies the configuration files inside the autoconfig.zip archive to where Firefox requires them inside the app bundle] PkgCreator [creates a package based on the content in the package root, named with a custom name and the app’s version number (acquired earlier)] There are a few interesting details to consider here. With PkgRootCreator, a directory for the installer payload is created inside the cache in a folder named with the value of the NAME input key; this is the normal convention. When you create the package root, you normally create all the directories you need for the payload you are going to deliver, since the package needs to mimic the system directory layout. 
Thus, the recipe creates an Applications directory at the root of the package with the same read-write-execute permissions (expressed as the octal number 0775) as you would usually see on that folder. You could create additional directories by specifying more key+string pairs within the pkgdirs dictionary. The use of Unarchiver is both unusual and creative. In many cases, you would use a Copier processor here, but since the contents of the autoconfig.zip archive (once expanded) is what we need placed inside Contents/Resources within the application bundle, Greg chose to unarchive directly to that specific destination. Finally, PkgCreator contains the instructions for AutoPkg to use when building the package (you can find out more about what each of the settings means in the documentation found within the source code of autopkgserver). The most interesting setting for us is the chown section. In general, any directory you created with the PkgRootCreator processor should have owner and group set within this processor.[4] In this case, root and admin are the normal owner and group for the Applications folder, so they have been specified here. Just a short sidebar on Firefox here: Mozilla created a new JSON-based configuration method that works with Firefox 60 or later (and is supported cross-platform). Neil Martin adapted Greg Neagle’s recipe to support this new method. If you examine the recipe (FirefoxPolicies.pkg in the neilmartin83-recipes repo), the only significant difference is the use of the Copier processor in the fourth processor step. It also requires the user to place the payload to add to Firefox within a particular location in your AutoPkg cache. VueScanLicenced The reason I wrote the VueScanLicenced.pkg recipe is not because I need to mass deploy VueScan—currently, I’m deploying it on one Mac. It’s because by doing so I can eliminate a manual process that is not going to change any time soon (due to the vendor’s free update policy for the Pro licence). Writing the recipe functions as both automation and documentation, as you will see in a moment. Just as with the FirefoxAutoconfig.pkg recipe we just looked at, this recipe uses an existing download recipe for VueScan as the parent. It then adds the following processor steps, which are also very similar: AppDmgVersioner [grabs the version number of the downloaded app for later use] PkgRootCreator [creates a place in the cache where the files being deployed will be added, including the directories needed for the files] Copier [copies the VueScan app from the disk image into the Applications folder inside the package root just created] Copier [copies the VueScan licence information file from the user-specified location on the local disk to the place that VueScan requires it] PkgCreator [creates a package based on the content in the package root, named with a custom name and the app’s version number (acquired earlier)] The main difference between this recipe and FirefoxAutoconfig is that we create three directories when creating the package root, since we will have a file not just in /Applications, but also in /Users/Shared (we have to create /Users before we can create /Users/Shared). As I mentioned earlier, this is done by simply listing key+string pairs (path plus permissions) in the pkgdirs dictionary. This also means that when we build the package, we need to set the ownership for all three directories, which is done using an array in the PkgCreator processor step (pkg_request > chown > dict for each directory). 
Finally, we use the Copier processor to place the licence information file in the location required for VueScan to recognize it. In this case, I’ve documented in the recipe Description how to generate the necessary file, the easiest way being to temporarily authorize VueScan on the Mac (or VM) that runs AutoPkg such that the licence information file will be placed in the default location. But I have also documented what the contents of that licence file are, since it is a simple text file that you can construct and locate anywhere on the local system. Since the Copier processor requires that we specify the full path for the copied text file, this allows us the flexibility to let the user do this. In the FirefoxAutoconfig example, that recipe could just as easily been written to require that the user specify the path to the autoconfig.zip file as an input variable, but is probably cleaner as it was written. In the case of my recipe, since the filename of the licence information file that VueScan requires starts with a dot (and thus makes the file invisible in the Finder by default), I decided to allow the recipe user to make that a visible file if they wished, hence the RC_FILE input key. The (second) Copier processor simply renames the file to the required value when placing it in /Users/Shared within the package root. To be clear, because this is a process I only need to do once a year, I could have suffered along doing it manually or I could have used a dedicated package creation tool (e.g., WhiteBox Packages, Jamf Composer, munkipkg, The Luggage) or even the pkgbuild tool in macOS (which all the other tools leverage to do their work). But since I was comfortable with how AutoPkg works, I took advantage of its ability to build arbitrary packages to make my future deployments easier. AbletonLive Recently, I needed to deploy Ableton Live 10 Lite. Conveniently, Tim Sutton had already written a download recipe for version 9 that could easily be adapted for the current version. But I noticed that the download recipe did not include code signature verification. So since Tim has changed jobs since he wrote that recipe and is unlikely to update (or want to maintain) it, I decided I would adapt his download recipe for version 10 and release it in my repo, adding further child recipes to support my deployment needs. Let’s examine the download and pkg recipes in my repo. Here are the processor steps I use to create the package installer for Live: URLTextSearcher [grabs the latest version number from a web page on the Ableton web site] URLDownloader [downloads the latest installer] EndOfCheckPhase [allows anyone running the recipe with the --check option to gracefully exit] CodeSignatureVerifier [verifies that the app downloaded is codesigned] FileFinder [grabs the name of the app from the disk image for use in subsequent steps] Copier [copies the app from the disk image to a location in the cache] PathDeleter [eliminates the SoundCloud extension from the app bundle – more on this in a moment] AppPkgCreator [wraps the edited app in a pkg installer] StopProcessingIf […we didn't need to build a new package] PathDeleter [delete extra copies of the (rather large) app created as a part of the automation] Let me highlight some of the details, both of Tim’s original recipe and my extensions thereof. Ableton provides a page on their web site that lists the latest minor version of their two most recent versions. 
They also use a very uniform download URL scheme, such that, if you know the version number, you can determine the URL for the download. So Tim’s original recipe leveraged that and grabbed the version number from the web page.[5] His recipe also gave the user the option to download the 32-bit or 64-bit version. In my update, I eliminated the 32-bit option (since it won’t be useful for very much longer) and added the ability to specify which major version you wanted. In many ways, I’m trying to future-proof the recipe, such that when Version 11 is released, you might only need to change your override to make it work. The FileFinder-Copier-PathDeleter processor sequence was necessary because of an issue with the app bundle. When I simply used AppPkgCreator to create the package, it failed with the following error: pkgbuild failed with exit code -6: [/BuildRoot/Library/Caches/com.apple.xbs/Sources/Bom/Bom-194.2/FSObject/BOMFSOArchInfo.c:477] file OAuth2Client is corrupt: slice for <cputype 117440512, subtype 50331648> extends beyond length of file. (218168 > 218145) The key in that message is that pkgbuild, the command line tool behind AutoPkg’s ability to build packages, found a problem building the package. I re-downloaded just in case, and it was not a case of a failed or corrupted download. Because the problem was with pkgbuild, that meant any other tool that would try to build a package from that app (like the ones I mentioned earlier) would have the same problem. In searching the #autopkg channel on the MacAdmins Slack, I saw that Stephen Warneford-Bygrave also had this problem back with Version 9 of Live and found that deleting the SoundCloud extension in the app bundle allowed the package to be built. This is what led to the steps before the AppPkgCreator processor. The FileFinder processor was new to me. It helped solve a problem caused by the fact that the recipe can download any of the five variants of Live (and in one of two major versions). I want the app that we deploy to have the same name as the one that we downloaded, but the Copier processor requires that you supply the app name for the target; you can’t just do like you might using cp on the command line. In a recipe that had fewer options, I could have just hard coded the matching value. In this instance, FileFinder can determine that name for me. Since there is only one app in the disk image downloaded, I can use FileFinder to find the app name by specifying a simple *.app globbing match within the disk image; it will return the filename of the app as the variable dmg_found_filename, which I can then use in the Copier processor. Once I’ve got the app copied, I can remove the SoundCloud extension using the PathDeleter processor. When I make the package installer from the modified app, I simplified things by using the AppPkgCreator processor. More commonly, you would use this processor on an app still contained in a disk image, but it works on any app you can point to, including ones extracted from an archive or already copied from a disk image (like we did in this case). I would not normally include the last two processor steps, but Ableton Live is close to 2 GB in size, even in the “Lite” edition. The extra copies of the app generated by the recipe (particularly on a non-APFS drive) could start filling your storage quickly. 
So I chose to “clean up after myself” and delete both the app I copied (and then modified) and the scratch area that AppPkgCreator used to make another copy of the modified app for bundling into the package. The StopProcessingIf processor is the only processor in AutoPkg that is conditional. The reason I used it here is because I chose to be aggressive with what I cleaned up with PathDeleter; when the recipe is run and there is no new package to build, the payload directory is not created, so trying to delete it throws an error. Thus, StopProcessingIf causes this recipe to stop if there is no new package to build. Long after I finished working on the pkg recipe, I took a close look at the MoofIT pkg recipe for Live 9. This takes an approach more like the first recipe we looked at, using the traditional package-building processors. It has the benefit of not using as much storage space (hence the lack of cleanup steps) but has to do the work that AppPkgCreator provides for free. You can make good arguments for either approach. More Reading/Watching If you decide that you want to learn more about using AutoPkg as a general purpose packaging tool, Greg Neagle wrote a blog post on the topic, complete with a few examples in a GitHub repo of his. Those examples are more of the type where you already have the files you need (because you probably created them) and want to use AutoPkg to turn them into an installer. Also, since two of our examples messed with an application bundle, I feel it is my Mac Admins duty to remind you that you can do bad things with AutoPkg. Elliot Jordan has done a few talks on this very topic, most of which have been recorded. I recommend you check out his presentation from the 2017 Mac Admins Conference at Penn State (slides or video), as it includes information about the trust verification features added in AutoPkg 1.0.0. I hope the examples I provided in this blog post help you leverage more of the power of AutoPkg to automate things you would otherwise need to do manually.
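If you want to try the FirefoxAutoconfig example from the top of this post yourself, the command-line side is short. This is only a sketch; it assumes you already have AutoPkg set up and have added the recipe repo that contains FirefoxAutoconfig.pkg:
# Create an override you can customize and trust
autopkg make-override FirefoxAutoconfig.pkg
# Put your CCK2-generated autoconfig.zip beside the override (in your RecipeOverrides directory), then:
autopkg run -v FirefoxAutoconfig.pkg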
8585
dbpedia
3
6
https://macintoshguy.wordpress.com/2017/05/15/bash-completion-for-autopkg/
en
bash completion for autopkg
https://s0.wp.com/i/blank.jpg
https://s0.wp.com/i/blank.jpg
[ "https://i0.wp.com/www.linkedin.com/img/webpromo/btn_viewmy_160x33.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[]
2017-05-15T00:00:00
Over the weekend I was feeling a little bored so I decided to try my hand at writing a shell script to add custom completion for autopkg to bash.
en
https://s1.wp.com/i/favicon.ico
The Macintosh Guy
https://macintoshguy.wordpress.com/2017/05/15/bash-completion-for-autopkg/
Over the weekend I was feeling a little bored so I decided to try my hand at writing a shell script to add custom completion for autopkg to bash. (tl;dr – the script is on GitHub.) I found an example for the zsh shell which lacked a couple of features and I spent some time examining the script for brew so I wasn’t totally in the dark. There are a number of tutorials available for writing them but none are particularly detailed so that wasn’t much help. Writing Shell Scripts The first thing I should say is that I find writing shell scripts totally different to writing for any other language. I probably write shell scripts incredibly old school, shell and C were the two languages I was paid to write way back in the 1980’s. It feels like coming home. In shell I write tiny functions. The final script was 224 lines long and contains 22 functions, around half a dozen either do nothing or contain a single call to another function. Apart from the main function none is longer than ten lines of code. Even the main function is quite simple, though it runs to 40 lines or so half is a single case statement with a line for each of the commands in autopkg. Let me show you. Here’s some of those tiny functions: _autopkg_processor_info() { local cur="${COMP_WORDS[COMP_CWORD]}" case "$cur" in --*) __autopkgcomp "--help --recipe= --search-dir= --override-dir= --name=" return ;; esac } _autopkg_repo-add() { return } _autopkg_repo-delete() { _autopkg_comp_repos } In the first one we have a single case statement. This could have been an if but it looks much neater and clearer written as a case statement. You might wonder why I bothered writing the next two functions at all. It’s done in the name of consistency. Down in the main function we have the case statement : case "$cmd" in audit) _autopkg_audit ;; help) _autopkg_help ;; info) _autopkg_info ;; list-processors) _autopkg_list_processors ;; list-recipes) _autopkg_list_recipes ;; make-override) _autopkg_make_override ;; processor-info) _autopkg_processor_info ;; repo-add) _autopkg_repo-add ;; repo-delete) _autopkg_delete ;; repo-list) _autopkg_list_processors ;; repo-update) _autopkg_repo-update ;; run) _autopkg_run ;; search) _autopkg_search ;; update-trust-info) _autopkg_update_trust_info ;; verify-trust-info) _autopkg_verify_trust_info ;; version) _autopkg_version ;; install) _autopkg_install ;; *) esac This statement is one line for each possible autopkg command. By creating those “useless” functions it makes this case statement look clean and clear. It also makes it obvious what we have to do if autopkg adds a new command – add a line in this case statement, write a new function and put the new command into a string called “opts”. By using those tiny functions we’ve made our code much cleaner. The other complexity in shell scripting is that so much of what you write ends up being other tools or sometimes complex shell builtins. Writing completions is a classic example, there are two special builtins, complete and compgen which you will need to understand. Then I also had to use grep and expr. The grep command was simple but the expr comand is a little gnarly: local repos="$(for r in `autopkg repo-list`; do expr $r : '.*(\(.*\))'; done)" It’s not really the fault of expr, that’s a regular expression after the : and they’re often gnarly but it does show that you need to be familiar with a wide range of small tools for shell programming. 
By the way, that regular expression takes a string in the form <directorypath> (<URL>) and returns the URL without the parentheses. It will even cope with parentheses in the directory path since expr has “greedy expansion” and that first .* in the expression will grab everything up to the last open parenthesis character. This is, of course, exactly the sort of detail you have to be all over when writing shell scripts. That single line probably took me more than twenty times longer to write than any other line of code in the script. Given that I wanted to check that it would cope with any possible directory path and URL, I actually wrote another shell script that took its first argument and ran it through the expr command. My final step was to write a file containing 16 possible complications in the file path and a half dozen possible complications in the URL and loop through the file running the test script on each line. I had nothing to worry about, but it was nice to be sure. I was so happy when I finished that line I actually posted a status update to Facebook and had a celebratory bourbon.
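For the curious, the test harness described above does not need to be fancy; a minimal sketch of such a helper (the script name here is made up) could be:
#!/bin/bash
# expr-test.sh : print the URL extracted from a "<directorypath> (<URL>)" string,
# using the same expr call as the completion script
expr "$1" : '.*(\(.*\))'
Running it against a plausible autopkg repo-list style line, for example ./expr-test.sh '/Users/me/Library/AutoPkg/RecipeRepos/com.github.autopkg.recipes (https://github.com/autopkg/recipes)', should print just the URL.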
8585
dbpedia
0
26
https://carpentries-incubator.github.io/python_packaging/instructor/03-building-and-installing.html
en
Python Packaging: Building and Installing Packages using setuptools
https://carpentries-incu…avicon-32x32.png
https://carpentries-incu…avicon-32x32.png
[ "https://carpentries-incubator.github.io/python_packaging/assets/images/incubator-logo.svg", "https://carpentries-incubator.github.io/python_packaging/assets/images/incubator-logo-sm.svg" ]
[]
[]
[ "" ]
null
[]
2023-07-04T00:00:00
en
../apple-touch-icon.png
null
Building and Installing Packages using setuptools Last updated on 2024-04-16 | Edit this page Estimated time: 20 minutes Introduction In the first lesson, we showed how to use the PYTHONPATH environment variable to enable us to import our modules and packages from anywhere on our system. There are a few disadvantages to this method: If we have two different versions of a package on our system at once, it can be tedious to manually update PYTHONPATH whenever we want to switch between them. If we have multiple Python environments on our system (using tools such as venv or conda), setting PYTHONPATH will affect all of them. This can lead to unexpected dependency conflicts that can be very hard to debug. If we share our software with others and require them to update their own PYTHONPATH, they will need to install any requirements for our package separately, which can be error prone. It would be preferable if we could install our package using pip, the same way that we would normally install external Python packages. However, if we enter the top level directory of our project and try the following: BASH $ cd /path/to/my/workspace/epi_models $ python3 -m pip install . We get the following error: OUTPUT ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. In order to make our project installable, we need to add the either the file pyproject.toml or setup.py to our project. For modern Python projects, it is recommended to write only pyproject.toml. This was introduced by PEP 517, PEP 518 and PEP 621 as a standard way to define a Python project, and all tools that build, install, and publish Python packages are expected to use it. What issetup.py? setup.py serves a similar role to pyproject.toml, but it is no longer recommended for use. The lesson on the history of build tools explains how it works and why the community has moved away from it. By making our project pip-installable, we’ll also make it very easy to publish our packages on public repositories – this will be covered in our lesson on package publishing. After publishing our work, our users will be able to download and install our package using pip from any machine of their chocie! To begin, we’ll introduce the concept of a ‘Python environment’, and how these can help us manage our workflows. Managing Python Environments When working with Python, it can sometimes be beneficial to install packages to an isolated environment instead of installing them globally. Usually, this is done to manage competing dependencies: Project B might depend upon Project A, but may have been written to use version 1.0. Project C might also depend upon Project A, but may instead only work with version 2.0. If we install Project A globally and choose version 2.0, then Project B will not work. Similarly, if we choose version 1.0, Project C will not work. A good way to handle these sorts of conflicts is to instead use virtual environments for each project. A number of tools have been developed to manage virtual environments, such as venv, which is a standard built-in Python tool, and conda, which is a powerful third-party tool. We’ll focus on venv here, but both tools work similarly. Callout You can pip install packages into a conda virtual environment, so much of the advice in this lesson will still apply if you prefer to use conda. 
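Callout
If you ever suspect you are importing a different copy of a package than you intended (one of the PYTHONPATH pitfalls mentioned above), you can ask Python where an import is actually coming from; a quick check, not part of the original lesson:
BASH
$ python3 -c "import epi_models; print(epi_models.__file__)"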
If we’re using Linux, we can find which Python environment we’re using by calling: BASH $ which python3 If we’re using the default system environment, the result is something like the following: OUTPUT /usr/bin/python3 To create a new virtual environment using venv, we can call: BASH $ python3 -m venv /path/to/my/env This will create a new directory at the location /path/to/my/env. Note that this can be a relative path, so just calling python3 -m venv myenv will create the virtual environment in the directory ./myenv. We can then ‘activate’ the virtual environment using: BASH $ source /path/to/my/env/bin/activate Checking which Python we’re running should now give a different result: BASH $ which python3 OUTPUT /path/to/my/env/bin/python3 If we now install a new package, it will be installed within our new virtual environment instead of being installed to the system libraries. For example: BASH $ python3 -m pip install numpy We should now find NumPy installed at the following location (note that the Python version may not match yours): BASH $ ls /path/to/my/env/lib/python3.8/site-packages/numpy site-packages is a standard location to store installed Python packages. We can see this by analysing Python’s import path: PYTHON >>> import sys >>> print(sys.path) ['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/path/to/my/env/lib/python3.8/site-packages'] If we no longer wish to use this virtual environment, we can return to the system environment by calling: BASH $ deactivate Virtual environments are very useful when we’re testing our code, as they allow us to create a fresh Python environment without any of the installed packages we normally use in our work. This will be important later when we add dependencies to our package, as this allows us to test whether our users will be able to install and run our code properly using a fresh environment. An Overview of TOML files pyproject.toml is a TOML file, which stands for ‘Tom’s Obvious Minimal Langauge’ (named for its developer, Thomas Preston-Werner, who cofounded GitHub). There are many configuration file formats in common usage, such as YAML, JSON, and INI, but the Python community chose TOML as it provides some benefits over the competition: Designed to be human writable and human readable. Can map unambiguously to a hash table (a dict in Python). It has a formal specification, so has an unambiguous set of rules. A TOML file contains a series of key = value pairs, which may be grouped into sections using a header enclosed in square brackets, such as [section name]. The values are typed, unlike some other formats where all values are strings. The available types are strings, integers, floats, booleans, and dates. It is possible to store lists of values in arrays, or store a series of key-value pairs in tables. For example: TOML # file: mytoml.toml int_val = 5 float_val = 0.5 string_val = "hello world" bool_val = true date_val = 2023-01-01T08:00:00 array = [1, 2, 3] inline_table = {key = "value"} # Section headings allow us to define tables over # multiple lines [header_table] name = "John" dob = 2002-03-05 # We can define subtables using dot notation [header_table.subtable] foo = "bar" We can read this using the toml library in Python: BASH $ python3 -m pip install toml PYTHON >>> import toml >>> with open("mytoml.toml", "r") as f: ... 
data = toml.load(f) >>> print(data) The result is a dictionary object, with TOML types converted to their corresponding Python types: { 'int_val': 5, 'float_val': 0.5, 'string_val': 'hello world', 'bool_val': True, 'date_val': datetime.datetime(2023, 1, 1, 8, 0), 'array': [1, 2, 3], 'inline_table': {'key': 'value'}, 'header_table': { 'name': 'John', 'dob': datetime.date(2002, 3, 5), 'subtable': { 'foo': 'bar' } } } Callout Since Python 3.11, tomllib is part of Python’s standard library. It works the same as above, but you’ll need to import tomllib instead of toml. Installing our package with pyproject.toml First, we will show how to write a relatively minimal pyproject.toml file so that we can install our projects using pip. We will then cover some additional tricks that can be achieved with this file: Use alternative directory structures Include any data files needed by our code Generate an executable so that our scripts can be run directly from the command line Configure our development tools. To make our package pip-installable, we should add the file pyproject.toml to the top-level epi_models directory: 📁 epi_models | |____📜 pyproject.toml |____📦 epi_models | |____📜 __init__.py |____📜 __main__.py | |____📁 models | | | |____📜 __init__.py | |____📜 SIR.py | |____📜 SEIR.py | |____📜 SIS.py | |____📜 utils.py | |____📁 plotting | |____📜 __init__.py |____📜 plot_SIR.py |____📜 plot_SEIR.py |____📜 plot_SIS.py The first section in our pyproject.toml file should specify which build system we wish to use, and additionally specify any version requirements for packages used to build our code. This is necessary to avoid a circular dependecy problem that occurred with earlier Python build systems, in which the user had to run an install program to determine the project’s dependencies, but needed to already have the correct build tool installed to run the install program – see the lesson on historical build tools for more detail. We will choose to use setuptools, which requires the following: requires is set to a list of strings, each of which names a dependency of the build system and (optionally) its minimum version. This uses the same version syntax as pip. build-backend is set to a sub-module of setuptools which implements the PEP 517 build interface. With our build system determined, we can add some metadata that defines our project. At a minimum, we should specify the name of the package, its version, and our dependencies: That’s all we need! We’ll discuss versioning in our lesson on publishing. With this done, we can install our package using: BASH $ python3 -m pip install . This will automatically download and install our dependencies, and our package will be importable regardless of which directory we’re in. The installed package can be found in the directory /path/to/my/env/lib/python3.8/site-packages/epi_models along with a new directory, epi_models-0.1.0.dist-info, which simply contains metadata describing our project. If we look inside our installed package, we’ll see that our files have been copied, and there is also a __pycache__ directory: BASH $ ls /path/to/my/env/lib/python3.8/site-packages/epi_models __init__.py __main__.py models plotting __pycache__ The __pycache__ directory contains Python bytecode, which is a lower-level version of Python that is understood by the Python Virtual Machine (PVM). All of our Python code is converted to bytecode when it is run or imported, and by pre-compiling our package it can be imported much faster. 
If we look into the directories models and plotting, we’ll see those have been compiled to bytecode too. If we wish to uninstall, we may call: BASH $ python3 -m pip uninstall epi_models We can also create an ‘editable install’, in which any changes we make to our code are instantly recognised by any codes importing it – this mode can be very useful when developing our code, especially when working on documentation or tests. BASH $ python3 -m pip install -e . $ # Or... $ python3 -m pip install --editable . Callout The ability to create editable installs from a pyproject.toml-only build was standardised in PEP 660, and only recently implemented in pip. You may need to upgrade to use this feature: BASH $ python3 -m pip install --upgrade pip There are many other options we can add to our pyproject.toml to better describe our project. PEP 621 defines a minimum list of possible metadata that all build tools should support, so we’ll stick to that list. Each build tool will also define synonyms for some metadata entries, and additional tool-specific metadata. Some of the recommended core metadata keys are described below: TOML # file: pyproject.toml [project] # name: String, REQUIRED name = "my_project" # version: String, REQUIRED # Should follow PEP 440 rules # Can be provided dynamically, see the lesson on publishing version = "1.2.3" # description: String # A simple summary of the project description = "My wonderful Python package" # readme: String # Full description of the project. # Should be the path to your README file, relative to pyproject.toml readme = "README.md" # requires-python: String # The Python version required by the project requires-python = ">=3.8" # license: Table # The license of your project. # Can be provided as a file or a text description. # Discussed in the lesson on publishing license = {file = "LICENSE.md"} # or... license = {text = "BDS 3-Clause License"} # authors: Array of Tables # Can also be called 'maintainers'. # Each entry can have a name and/or an email authors = [ {name = "My Name", email = "my.email@email.net"}, {name = "My Friend", email = "their.email@email.net"}, ] # urls: Table # Should describe where to find useful info for your project urls = {source = "github.com/MyProfile/my_project", documentation = "my_project.readthedocs.io/en/latest"} # dependencies: Array of Strings # A list of requirements for our package dependencies = [ "numpy >= 1.20", "pyyaml", ] Note that some of the longer tables in our TOML file can be written using non-inline tables if it improved readability: TOML [project.urls] Source = "github.com/MyProfile/my_project", Documentation = "my_project.readthedocs.io/en/latest", Alternative Directory Structures setuptools provides some additional tools to help us install our package if they use a different layout to the ‘flat’ layout we covered so far. A popular alternative layout is the src-layout: 📁 epi_models | |____📜 pyproject.toml |____📁 src | |____📦 epi_models | |____📜 __init__.py |____📜 __main__.py |____📁 models |____📁 plotting The main benefit of this choice is that setuptools won’t accidentally bundle any utility modules stored in the top-level directory with our package. It can also be neater when one project contains multiple packages. Note that directories and files with special names are excluded by default regardless of which layout we choose, such as test/, docs/, and setup.py. 
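One optional way to double-check what package discovery actually picked up (not covered in the lesson itself) is to install the project and ask pip which files it recorded for it:
BASH
$ python3 -m pip install .
$ python3 -m pip show --files epi_models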
We can also disable automatic package discovery and explicitly list the packages we wish to install: TOML # file: pyproject.toml [tool.setuptools] packages = ["my_package", "my_other_package"] Note that this is not part of the PEP 621 standard, and therefore instead of being listed under the [project] header, it is a method specific to setuptools. Finally, we may set up custom package discovery: TOML # file: pyproject.toml [tool.setuptools.packages.find] where = ["my_directory"] include = ["my_package", "my_other_package"] exclude = ["my_package.tests*"] However, for ease of use, it is recommended to stick to either the flat layout or the src layout. Package Data Sometimes our code requires some non-.py files in order to function properly, but these would not be picked up by automatic package discovery. For example, the project may store default input data in .json files. These could be included with your package by adding the following to pyproject.toml: TOML # file: pyproject.toml [tool.setuptools.package-data] epi_models = ["*.json"] Note that this would grab only .json files in the top-level directory of our project. To include data files from all packages and sub-packages, we should instead write: TOML # file: pyproject.toml [tool.setuptools.package-data] "*" = ["*.json"] Installing Scripts If our package contains any scripts and/or a __main__.py file, we can run those from anywhere on our system after installation: BASH $ python3 -m epi_models $ python3 -m epi_models.plotting.plot_SIR With a little extra work, we can also install a simplified interface that doesn’t require python3 -m in front. This is how tools like pip can be invoked using two possible methods: BASH $ python3 -m pip # Invoke with python $ pip # Invoke via console-scripts entrypoint This can be achieved by adding a table scripts under the [project] header: TOML # file: pyproject.toml [project] scripts = {epi_models = "epi_models.__main__:main"} # Alternative form: [project.scripts] epi_models = "epi_models.__main__:main" This syntax means that we should create a console script epi_models, and that running it should call the function main() from the file epi_models/__main__.py. This will require a slight modification to our __main__.py file. All that’s necessary is to move everything from the script into a function main() that takes no arguments, and then to call main() at the bottom of the script: PYTHON # file: main.py def main(): # Put the __main__ script here... main() This will allow us to run our package as a script directly from the command line BASH $ python3 -m pip install . $ epi_models --help Note that we’ll still be able to run our code using the longer form: BASH $ python3 -m epi_models --help If we have multiple scripts in our package, these can all be given invidual console scripts. However, these will also need to have a function name as an entry point: TOML # file: pyproject.toml [project.scripts] epi_models = "epi_models.__main__:main" epi_models_sir = "epi_models.plotting.plot_SIR:main" So how do these scripts work? When we activate a virtual environment, a new entry is added to our PATH environment variable linking to /path/to/my/env/bin/: BASH PATH = "/path/to/my/env/bin:${PATH}" After installing our console scripts, we can find a new file in this directory with the name we assigned to it. 
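Before looking at that generated file, here is a slightly fuller sketch of the __main__.py arrangement described above. The argparse interface is purely illustrative (it is not part of the lesson), and the if __name__ == "__main__" guard is a common refinement over the bare main() call shown earlier: it stops main() from running a second time when the generated console script imports the module.

PYTHON
# file: epi_models/__main__.py
import argparse


def main():
    # Hypothetical command-line interface; replace with the real script body
    parser = argparse.ArgumentParser(prog="epi_models")
    parser.add_argument("--model", default="SIR", help="which model to run (illustrative)")
    args = parser.parse_args()
    print(f"Running the {args.model} model")
    return 0


if __name__ == "__main__":
    main()

The wrapper that pip generates, shown next, simply imports this main function and calls it.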
For example, /path/to/my/env/bin/epi_models: PYTHON #!/path/to/my/env/bin/python3 # -*- coding: utf-8 -*- import re import sys from epi_models.__main__ import main if __name__ == '__main__': sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) sys.exit(main()) Installing our project has automatically created a new Python file that can be run as a command line script due to the hash-bang (#!) on the top line, and all it does is import our main function and run it. As it’s contained within the bin/ directory of our Python environment, it’s available for use as long as we’re using that environment, but as soon as we call deactivate, it is removed from our PATH. Setting Dependency Versions Earlier, when setting dependencies in our pyproject.toml, we chose to specify a minimum requirement for numpy, but not for pyyaml: This indicates that pip should install any version of numpy that is at least 1.20, but that any version of pyyaml will do. If our installed numpy version is less than 1.20, or if it isn’t installed at all, pip will upgrade to the latest version that’s compatible with the rest of our installed packages and our Python version. We’ll cover software versioning in more detail in the lesson on publishing, but now we’ll simply cover some ways to specify which software versions we need: TOML "numpy >= 1.20" # Must be at least 1.20 "numpy > 1.20" # Must be greater than 1.20 "numpy == 1.20" # Must be exactly 1.20 "numpy <= 1.20" # Must be 1.20 at most "numpy < 1.20" # Must be less than 1.20 "numpy == 1.*" # Must be any version 1 If we separate our clauses with commas, we can combine these requirements: TOML # At least 1.20, less than 1.22, and not the release 1.21.3 "numpy >= 1.20, < 1.22, != 1.21.3" A useful shorthand is the ‘compatible release’ clause: TOML "numpy ~= 1.20" # Must be a release compatible with 1.20 This is equivalent to: TOML "numpy >= 1.20, == 1.*" That is, we require anything which is version 1, provided it’s at least 1.20. This would include version 1.25, but exclude version 2.0. We’ll come back to this later when we discuss publishing. Optional Dependencies Sometimes we might have dependencies that only make sense for a certain kind of user. For example, a developer of our library might need any libraries we use to run unit tests or build documentation, but an end user would not. These can be added as optional-dependencies: These dependencies can be installed by adding the name of each optional dependency group in square brackets after telling pip what we want to install: BASH $ pip install .[test] # Include testing dependencies $ pip install .[doc] # Include documentation dependencies $ pip install .[test,doc] # Include all dependencies
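The [project.optional-dependencies] table referred to above did not survive extraction. A sketch of what it might contain, using the test and doc group names that the pip install commands above rely on; the specific packages named here are assumptions rather than the lesson's own choices:

TOML
# file: pyproject.toml
[project.optional-dependencies]
test = ["pytest"]
doc = ["sphinx"]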
8585
dbpedia
3
24
https://stackoverflow.com/questions/46743068/python-auto-import-extension-for-vscode
en
Python auto-import extension for VSCode
https://cdn.sstatic.net/…g?v=73d79a89bded
https://cdn.sstatic.net/…g?v=73d79a89bded
[ "https://www.gravatar.com/avatar/256e13236fd3f35f5d3759695c603ec4?s=64&d=identicon&r=PG&f=y&so-version=2", "https://lh6.googleusercontent.com/-HD3_jNTPhzw/AAAAAAAAAAI/AAAAAAAADnQ/fE4I1GMR0ZA/photo.jpg?sz=64", "https://www.gravatar.com/avatar/99dc7b3a068fc5b2fcb80cef933102ae?s=64&d=identicon&r=PG", "https://www.gravatar.com/avatar/d7e4842aa6a1878b1672ad85db70d5be?s=64&d=identicon&r=PG&f=y&so-version=2", "https://graph.facebook.com/831905787/picture?type=large", "https://www.gravatar.com/avatar/8d88e68408b35e03b8c7fdb05c36dd7d?s=64&d=identicon&r=PG", "https://i.sstatic.net/a2GYi.png?s=64", "https://www.gravatar.com/avatar/99dc7b3a068fc5b2fcb80cef933102ae?s=64&d=identicon&r=PG", "https://i.sstatic.net/IaNlR.png", "https://i.sstatic.net/YLFLS.png", "https://i.sstatic.net/ovKvp.png", "https://i.sstatic.net/An3gf.jpg?s=64", "https://lh3.googleusercontent.com/-zpFS-Lora0M/AAAAAAAAAAI/AAAAAAAAAW8/x6ZpQUf6ldw/photo.jpg?sz=64", "https://i.sstatic.net/e3xsF.jpg?s=64", "https://i.sstatic.net/ocDYW.png?s=64", "https://www.gravatar.com/avatar/d87dde36373e3b8501812c132e1ba211?s=64&d=identicon&r=PG&f=y&so-version=2", "https://stackoverflow.com/posts/46743068/ivc/3f38?prg=4c3ba7f1-955e-47e1-957a-77adf457d334" ]
[]
[]
[ "" ]
null
[]
2017-10-14T09:39:34
Is there a Python auto-import extension/plugin available for VSCode? By auto-import I mean automatically importing of Python modules, so if you type sys.argv then it should automatically import the...
en
https://cdn.sstatic.net/Sites/stackoverflow/Img/favicon.ico?v=ec617d715196
Stack Overflow
https://stackoverflow.com/questions/46743068/python-auto-import-extension-for-vscode
(Updated answer as of August 2023) These did it for me: "python.analysis.indexing": true, "python.analysis.autoImportCompletions": true, If that is slowing down your computer too much because it's indexing too many files, then look into specifying patterns and depths of directories to include in the indexing using "python.analysis.packageIndexDepths", or using "python.analysis.exclude". Note that I am using Pylance (currently the default, as of January 2023). Check out the VSCode python settings reference for more info on each of those settings. Edit August 2023: removed "python.analysis.autoImportUserSymbols" because @YellowStrawHatter pointed out that it no longer exists. No, but it will soon be a part of vscode-python: https://github.com/Microsoft/vscode-python/pull/636 EDIT: See answer by @Eric, who built such an extension. EDIT 2: See answer by @Eyal Levin, mentioning such an extension (Pylance). From https://github.com/microsoft/python-language-server/issues/19#issuecomment-587303061: For those who wonder how to trigger auto-importing as I did, here are the steps. Enable Microsoft Python Language Server by removing the check of Python: Jedi Enabled in your settings. Reload the VSCode window. Hover your mouse over the variable that you want to import, and click Quick fix... For the last step, if it shows No quick fixes available or Checking for quick fixes, you may need to wait for a while until the extension has finished code analysis. It is also possible to set a shortcut that triggers a quick fix. You can set the setting below to true in settings.json to enable auto import. *The Pylance extension, installed automatically when the Python extension is installed, has the setting below (false by default); see my answer explaining how to open settings.json: // "settings.json" { ... "python.analysis.autoImportCompletions": true } Then, it shows all matched attributes and modules as shown below: Then, pressing Enter can automatically import what you select as shown below: In addition, if you don't set the setting below to true in settings.json for auto import: // "settings.json" { ... // "python.analysis.autoImportCompletions": true } Then, it only shows below:
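Putting the settings quoted in these answers together, a minimal settings.json might look like the sketch below (VS Code accepts // comments in settings.json); treat it as illustrative rather than a complete or authoritative configuration:

// "settings.json"
{
    // Index installed packages so Pylance can suggest symbols from them
    "python.analysis.indexing": true,
    // Offer auto-import suggestions in the completion list
    "python.analysis.autoImportCompletions": true
}

As noted above, "python.analysis.packageIndexDepths" and "python.analysis.exclude" can then be used to limit how much gets indexed if this slows things down.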
8585
dbpedia
2
87
https://opensource-heroes.com/u/timsutton
en
Discover @timsutton Open Source projects
https://opensource-heroes.com/og/record?login=timsutton
https://opensource-heroes.com/og/record?login=timsutton
[ "https://opensource-heroes.com/assets/logo-73bf46ac97f6e1342a34af7b069727172b3a1034.svg", "https://opensource-heroes.com/assets/logo-white-35f6cd5ea1d64625cd6f1e428a30632dfdba2c63.svg", "https://opensource-heroes.com/assets/github-logo-86050eddecfe1cfbd92943ca8476fe54d44aec4f.png", "https://opensource-heroes.com/assets/github-logo-86050eddecfe1cfbd92943ca8476fe54d44aec4f.png", "https://avatars.githubusercontent.com/u/119358?v=4", "https://opensource-heroes.com/assets/github-logo-black-29d80d5ce0a1d64ecfae1c5dd8335626d9b2261d.png", "https://opensource-heroes.com/assets/twitter-white-cf5ddeb01b2262190089c8374ee82611d8a83bc9.svg", "https://opensource-heroes.com/assets/linked-in-white-a36c0ee665eb9de62fb54380c7d490db2b46b07a.svg", "https://opensource-heroes.com/assets/facebook-white-8e3443638e6d181b5a3bb58ccbd929f600f68e19.png", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", 
"https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", 
"https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg", "https://opensource-heroes.com/assets/pixel-star-30f09ab325dab10b6bff4f56b1aaa35c77421cbc.svg", "https://opensource-heroes.com/assets/pixel-star-white-6ca0f53bbabea5f454c1fd7a5e173a10ca0c1950.svg" ]
[]
[]
[ "" ]
null
[]
null
Open Source Developer @timsutton Ranked as #5,139 in the World by GitHub Stars
en
Open Source Heroes
null
Love Open Source and this site? Check out how you can help us
8585
dbpedia
2
13
https://carpentries-incubator.github.io/python_packaging/instructor/03-building-and-installing.html
en
Python Packaging: Building and Installing Packages using setuptools
https://carpentries-incu…avicon-32x32.png
https://carpentries-incu…avicon-32x32.png
[ "https://carpentries-incubator.github.io/python_packaging/assets/images/incubator-logo.svg", "https://carpentries-incubator.github.io/python_packaging/assets/images/incubator-logo-sm.svg" ]
[]
[]
[ "" ]
null
[]
2023-07-04T00:00:00
en
../apple-touch-icon.png
null
Building and Installing Packages using setuptools Last updated on 2024-04-16 | Estimated time: 20 minutes Introduction In the first lesson, we showed how to use the PYTHONPATH environment variable to enable us to import our modules and packages from anywhere on our system. There are a few disadvantages to this method: If we have two different versions of a package on our system at once, it can be tedious to manually update PYTHONPATH whenever we want to switch between them. If we have multiple Python environments on our system (using tools such as venv or conda), setting PYTHONPATH will affect all of them. This can lead to unexpected dependency conflicts that can be very hard to debug. If we share our software with others and require them to update their own PYTHONPATH, they will need to install any requirements for our package separately, which can be error prone. It would be preferable if we could install our package using pip, the same way that we would normally install external Python packages. However, if we enter the top level directory of our project and try the following: BASH $ cd /path/to/my/workspace/epi_models $ python3 -m pip install . We get the following error: OUTPUT ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. In order to make our project installable, we need to add either the file pyproject.toml or setup.py to our project. For modern Python projects, it is recommended to write only pyproject.toml. This was introduced by PEP 517, PEP 518 and PEP 621 as a standard way to define a Python project, and all tools that build, install, and publish Python packages are expected to use it. What is setup.py? setup.py serves a similar role to pyproject.toml, but it is no longer recommended for use. The lesson on the history of build tools explains how it works and why the community has moved away from it. By making our project pip-installable, we’ll also make it very easy to publish our packages on public repositories – this will be covered in our lesson on package publishing. After publishing our work, our users will be able to download and install our package using pip from any machine of their choice! To begin, we’ll introduce the concept of a ‘Python environment’, and how these can help us manage our workflows. Managing Python Environments When working with Python, it can sometimes be beneficial to install packages to an isolated environment instead of installing them globally. Usually, this is done to manage competing dependencies: Project B might depend upon Project A, but may have been written to use version 1.0. Project C might also depend upon Project A, but may instead only work with version 2.0. If we install Project A globally and choose version 2.0, then Project B will not work. Similarly, if we choose version 1.0, Project C will not work. A good way to handle these sorts of conflicts is to instead use virtual environments for each project. A number of tools have been developed to manage virtual environments, such as venv, which is a standard built-in Python tool, and conda, which is a powerful third-party tool. We’ll focus on venv here, but both tools work similarly. Callout You can pip install packages into a conda virtual environment, so much of the advice in this lesson will still apply if you prefer to use conda.
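A small cross-platform sketch (not part of the lesson) that shows, from inside Python itself, which interpreter and environment are currently active; the shell-based equivalent follows below:

PYTHON
import sys

# The interpreter that is currently running, and the environment it belongs to
print(sys.executable)
print(sys.prefix)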
If we’re using Linux, we can find which Python environment we’re using by calling: BASH $ which python3 If we’re using the default system environment, the result is something like the following: OUTPUT /usr/bin/python3 To create a new virtual environment using venv, we can call: BASH $ python3 -m venv /path/to/my/env This will create a new directory at the location /path/to/my/env. Note that this can be a relative path, so just calling python3 -m venv myenv will create the virtual environment in the directory ./myenv. We can then ‘activate’ the virtual environment using: BASH $ source /path/to/my/env/bin/activate Checking which Python we’re running should now give a different result: BASH $ which python3 OUTPUT /path/to/my/env/bin/python3 If we now install a new package, it will be installed within our new virtual environment instead of being installed to the system libraries. For example: BASH $ python3 -m pip install numpy We should now find NumPy installed at the following location (note that the Python version may not match yours): BASH $ ls /path/to/my/env/lib/python3.8/site-packages/numpy site-packages is a standard location to store installed Python packages. We can see this by analysing Python’s import path: PYTHON >>> import sys >>> print(sys.path) ['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/path/to/my/env/lib/python3.8/site-packages'] If we no longer wish to use this virtual environment, we can return to the system environment by calling: BASH $ deactivate Virtual environments are very useful when we’re testing our code, as they allow us to create a fresh Python environment without any of the installed packages we normally use in our work. This will be important later when we add dependencies to our package, as this allows us to test whether our users will be able to install and run our code properly using a fresh environment. An Overview of TOML files pyproject.toml is a TOML file, which stands for ‘Tom’s Obvious Minimal Language’ (named for its developer, Thomas Preston-Werner, who cofounded GitHub). There are many configuration file formats in common usage, such as YAML, JSON, and INI, but the Python community chose TOML as it provides some benefits over the competition: Designed to be human writable and human readable. Can map unambiguously to a hash table (a dict in Python). It has a formal specification, so has an unambiguous set of rules. A TOML file contains a series of key = value pairs, which may be grouped into sections using a header enclosed in square brackets, such as [section name]. The values are typed, unlike some other formats where all values are strings. The available types are strings, integers, floats, booleans, and dates. It is possible to store lists of values in arrays, or store a series of key-value pairs in tables. For example: TOML # file: mytoml.toml int_val = 5 float_val = 0.5 string_val = "hello world" bool_val = true date_val = 2023-01-01T08:00:00 array = [1, 2, 3] inline_table = {key = "value"} # Section headings allow us to define tables over # multiple lines [header_table] name = "John" dob = 2002-03-05 # We can define subtables using dot notation [header_table.subtable] foo = "bar" We can read this using the toml library in Python: BASH $ python3 -m pip install toml PYTHON >>> import toml >>> with open("mytoml.toml", "r") as f: ...
data = toml.load(f) >>> print(data) The result is a dictionary object, with TOML types converted to their corresponding Python types: { 'int_val': 5, 'float_val': 0.5, 'string_val': 'hello world', 'bool_val': True, 'date_val': datetime.datetime(2023, 1, 1, 8, 0), 'array': [1, 2, 3], 'inline_table': {'key': 'value'}, 'header_table': { 'name': 'John', 'dob': datetime.date(2002, 3, 5), 'subtable': { 'foo': 'bar' } } } Callout Since Python 3.11, tomllib is part of Python’s standard library. It works the same as above, but you’ll need to import tomllib instead of toml. Installing our package with pyproject.toml First, we will show how to write a relatively minimal pyproject.toml file so that we can install our projects using pip. We will then cover some additional tricks that can be achieved with this file: Use alternative directory structures Include any data files needed by our code Generate an executable so that our scripts can be run directly from the command line Configure our development tools. To make our package pip-installable, we should add the file pyproject.toml to the top-level epi_models directory: 📁 epi_models | |____📜 pyproject.toml |____📦 epi_models | |____📜 __init__.py |____📜 __main__.py | |____📁 models | | | |____📜 __init__.py | |____📜 SIR.py | |____📜 SEIR.py | |____📜 SIS.py | |____📜 utils.py | |____📁 plotting | |____📜 __init__.py |____📜 plot_SIR.py |____📜 plot_SEIR.py |____📜 plot_SIS.py The first section in our pyproject.toml file should specify which build system we wish to use, and additionally specify any version requirements for packages used to build our code. This is necessary to avoid a circular dependecy problem that occurred with earlier Python build systems, in which the user had to run an install program to determine the project’s dependencies, but needed to already have the correct build tool installed to run the install program – see the lesson on historical build tools for more detail. We will choose to use setuptools, which requires the following: requires is set to a list of strings, each of which names a dependency of the build system and (optionally) its minimum version. This uses the same version syntax as pip. build-backend is set to a sub-module of setuptools which implements the PEP 517 build interface. With our build system determined, we can add some metadata that defines our project. At a minimum, we should specify the name of the package, its version, and our dependencies: That’s all we need! We’ll discuss versioning in our lesson on publishing. With this done, we can install our package using: BASH $ python3 -m pip install . This will automatically download and install our dependencies, and our package will be importable regardless of which directory we’re in. The installed package can be found in the directory /path/to/my/env/lib/python3.8/site-packages/epi_models along with a new directory, epi_models-0.1.0.dist-info, which simply contains metadata describing our project. If we look inside our installed package, we’ll see that our files have been copied, and there is also a __pycache__ directory: BASH $ ls /path/to/my/env/lib/python3.8/site-packages/epi_models __init__.py __main__.py models plotting __pycache__ The __pycache__ directory contains Python bytecode, which is a lower-level version of Python that is understood by the Python Virtual Machine (PVM). All of our Python code is converted to bytecode when it is run or imported, and by pre-compiling our package it can be imported much faster. 
If we look into the directories models and plotting, we’ll see those have been compiled to bytecode too. If we wish to uninstall, we may call: BASH $ python3 -m pip uninstall epi_models We can also create an ‘editable install’, in which any changes we make to our code are instantly recognised by any code importing it – this mode can be very useful when developing our code, especially when working on documentation or tests. BASH $ python3 -m pip install -e . $ # Or... $ python3 -m pip install --editable . Callout The ability to create editable installs from a pyproject.toml-only build was standardised in PEP 660, and only recently implemented in pip. You may need to upgrade to use this feature: BASH $ python3 -m pip install --upgrade pip There are many other options we can add to our pyproject.toml to better describe our project. PEP 621 defines a minimum list of possible metadata that all build tools should support, so we’ll stick to that list. Each build tool will also define synonyms for some metadata entries, and additional tool-specific metadata. Some of the recommended core metadata keys are described below: TOML # file: pyproject.toml [project] # name: String, REQUIRED name = "my_project" # version: String, REQUIRED # Should follow PEP 440 rules # Can be provided dynamically, see the lesson on publishing version = "1.2.3" # description: String # A simple summary of the project description = "My wonderful Python package" # readme: String # Full description of the project. # Should be the path to your README file, relative to pyproject.toml readme = "README.md" # requires-python: String # The Python version required by the project requires-python = ">=3.8" # license: Table # The license of your project. # Can be provided as a file or a text description. # Discussed in the lesson on publishing license = {file = "LICENSE.md"} # or... license = {text = "BSD 3-Clause License"} # authors: Array of Tables # Can also be called 'maintainers'. # Each entry can have a name and/or an email authors = [ {name = "My Name", email = "my.email@email.net"}, {name = "My Friend", email = "their.email@email.net"}, ] # urls: Table # Should describe where to find useful info for your project urls = {source = "github.com/MyProfile/my_project", documentation = "my_project.readthedocs.io/en/latest"} # dependencies: Array of Strings # A list of requirements for our package dependencies = [ "numpy >= 1.20", "pyyaml", ] Note that some of the longer tables in our TOML file can be written using non-inline tables if it improves readability: TOML [project.urls] Source = "github.com/MyProfile/my_project" Documentation = "my_project.readthedocs.io/en/latest" Alternative Directory Structures setuptools provides some additional tools to help us install our package if it uses a different layout to the ‘flat’ layout we covered so far. A popular alternative layout is the src-layout: 📁 epi_models | |____📜 pyproject.toml |____📁 src | |____📦 epi_models | |____📜 __init__.py |____📜 __main__.py |____📁 models |____📁 plotting The main benefit of this choice is that setuptools won’t accidentally bundle any utility modules stored in the top-level directory with our package. It can also be neater when one project contains multiple packages. Note that directories and files with special names are excluded by default regardless of which layout we choose, such as test/, docs/, and setup.py.
We can also disable automatic package discovery and explicitly list the packages we wish to install: TOML # file: pyproject.toml [tool.setuptools] packages = ["my_package", "my_other_package"] Note that this is not part of the PEP 621 standard, and therefore instead of being listed under the [project] header, it is a method specific to setuptools. Finally, we may set up custom package discovery: TOML # file: pyproject.toml [tool.setuptools.packages.find] where = ["my_directory"] include = ["my_package", "my_other_package"] exclude = ["my_package.tests*"] However, for ease of use, it is recommended to stick to either the flat layout or the src layout. Package Data Sometimes our code requires some non-.py files in order to function properly, but these would not be picked up by automatic package discovery. For example, the project may store default input data in .json files. These could be included with your package by adding the following to pyproject.toml: TOML # file: pyproject.toml [tool.setuptools.package-data] epi_models = ["*.json"] Note that this would grab only .json files in the top-level directory of our project. To include data files from all packages and sub-packages, we should instead write: TOML # file: pyproject.toml [tool.setuptools.package-data] "*" = ["*.json"] Installing Scripts If our package contains any scripts and/or a __main__.py file, we can run those from anywhere on our system after installation: BASH $ python3 -m epi_models $ python3 -m epi_models.plotting.plot_SIR With a little extra work, we can also install a simplified interface that doesn’t require python3 -m in front. This is how tools like pip can be invoked using two possible methods: BASH $ python3 -m pip # Invoke with python $ pip # Invoke via console-scripts entrypoint This can be achieved by adding a table scripts under the [project] header: TOML # file: pyproject.toml [project] scripts = {epi_models = "epi_models.__main__:main"} # Alternative form: [project.scripts] epi_models = "epi_models.__main__:main" This syntax means that we should create a console script epi_models, and that running it should call the function main() from the file epi_models/__main__.py. This will require a slight modification to our __main__.py file. All that’s necessary is to move everything from the script into a function main() that takes no arguments, and then to call main() at the bottom of the script: PYTHON # file: main.py def main(): # Put the __main__ script here... main() This will allow us to run our package as a script directly from the command line BASH $ python3 -m pip install . $ epi_models --help Note that we’ll still be able to run our code using the longer form: BASH $ python3 -m epi_models --help If we have multiple scripts in our package, these can all be given invidual console scripts. However, these will also need to have a function name as an entry point: TOML # file: pyproject.toml [project.scripts] epi_models = "epi_models.__main__:main" epi_models_sir = "epi_models.plotting.plot_SIR:main" So how do these scripts work? When we activate a virtual environment, a new entry is added to our PATH environment variable linking to /path/to/my/env/bin/: BASH PATH = "/path/to/my/env/bin:${PATH}" After installing our console scripts, we can find a new file in this directory with the name we assigned to it. 
For example, /path/to/my/env/bin/epi_models: PYTHON #!/path/to/my/env/bin/python3 # -*- coding: utf-8 -*- import re import sys from epi_models.__main__ import main if __name__ == '__main__': sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) sys.exit(main()) Installing our project has automatically created a new Python file that can be run as a command line script due to the hash-bang (#!) on the top line, and all it does is import our main function and run it. As it’s contained within the bin/ directory of our Python environment, it’s available for use as long as we’re using that environment, but as soon as we call deactivate, it is removed from our PATH. Setting Dependency Versions Earlier, when setting dependencies in our pyproject.toml, we chose to specify a minimum requirement for numpy, but not for pyyaml: This indicates that pip should install any version of numpy that is at least 1.20, but that any version of pyyaml will do. If our installed numpy version is less than 1.20, or if it isn’t installed at all, pip will upgrade to the latest version that’s compatible with the rest of our installed packages and our Python version. We’ll cover software versioning in more detail in the lesson on publishing, but now we’ll simply cover some ways to specify which software versions we need: TOML "numpy >= 1.20" # Must be at least 1.20 "numpy > 1.20" # Must be greater than 1.20 "numpy == 1.20" # Must be exactly 1.20 "numpy <= 1.20" # Must be 1.20 at most "numpy < 1.20" # Must be less than 1.20 "numpy == 1.*" # Must be any version 1 If we separate our clauses with commas, we can combine these requirements: TOML # At least 1.20, less than 1.22, and not the release 1.21.3 "numpy >= 1.20, < 1.22, != 1.21.3" A useful shorthand is the ‘compatible release’ clause: TOML "numpy ~= 1.20" # Must be a release compatible with 1.20 This is equivalent to: TOML "numpy >= 1.20, == 1.*" That is, we require anything which is version 1, provided it’s at least 1.20. This would include version 1.25, but exclude version 2.0. We’ll come back to this later when we discuss publishing. Optional Dependencies Sometimes we might have dependencies that only make sense for a certain kind of user. For example, a developer of our library might need any libraries we use to run unit tests or build documentation, but an end user would not. These can be added as optional-dependencies: These dependencies can be installed by adding the name of each optional dependency group in square brackets after telling pip what we want to install: BASH $ pip install .[test] # Include testing dependencies $ pip install .[doc] # Include documentation dependencies $ pip install .[test,doc] # Include all dependencies
8585
dbpedia
1
11
https://github.com/lindegroup/autopkgr/issues/635
en
AutoPkgr does not like autopkg 2.0.1 RC1 · Issue #635 · lindegroup/autopkgr
https://opengraph.githubassets.com/918785335c8462f2d0f02fbf5df140b9eb61bb94228e854ecb2b2b3f4c3d2cbe/lindegroup/autopkgr/issues/635
https://opengraph.githubassets.com/918785335c8462f2d0f02fbf5df140b9eb61bb94228e854ecb2b2b3f4c3d2cbe/lindegroup/autopkgr/issues/635
[ "https://avatars.githubusercontent.com/u/1584888?s=80&u=44a53141ee0a5f3aa3c14126c3bba47d77c0b026&v=4", "https://avatars.githubusercontent.com/u/7801391?s=80&v=4", "https://user-images.githubusercontent.com/7801391/72787460-d0ce8880-3be4-11ea-8af0-e38a72a099fd.png", "https://avatars.githubusercontent.com/u/7801391?s=40&v=4", "https://avatars.githubusercontent.com/u/20520337?s=80&u=62769ab88165f83c6a06b3ab38f0da8146d9e69f&v=4", "https://avatars.githubusercontent.com/u/7801391?s=40&v=4", "https://avatars.githubusercontent.com/u/20520337?s=80&u=62769ab88165f83c6a06b3ab38f0da8146d9e69f&v=4", "https://avatars.githubusercontent.com/u/1584888?s=80&u=44a53141ee0a5f3aa3c14126c3bba47d77c0b026&v=4", "https://avatars.githubusercontent.com/u/1584888?s=80&u=44a53141ee0a5f3aa3c14126c3bba47d77c0b026&v=4", "https://avatars.githubusercontent.com/u/20520337?s=80&u=62769ab88165f83c6a06b3ab38f0da8146d9e69f&v=4", "https://avatars.githubusercontent.com/u/20520337?s=40&u=62769ab88165f83c6a06b3ab38f0da8146d9e69f&v=4", "https://avatars.githubusercontent.com/u/20520337?s=80&u=62769ab88165f83c6a06b3ab38f0da8146d9e69f&v=4", "https://avatars.githubusercontent.com/u/20520337?s=40&u=62769ab88165f83c6a06b3ab38f0da8146d9e69f&v=4", "https://avatars.githubusercontent.com/u/1584888?s=52&v=4", "https://avatars.githubusercontent.com/u/7801391?s=52&v=4", "https://avatars.githubusercontent.com/u/20520337?s=52&v=4" ]
[]
[]
[ "" ]
null
[]
null
When running a scheduled run of my list of recipes: - File "/usr/local/bin/autopkg", line 126 log_err(f"WARNING: plist error for {filename}: {err}") ^ SyntaxError: invalid syntax Apparently re-produced by @homebysix w...
en
https://github.com/fluidicon.png
GitHub
https://github.com/lindegroup/autopkgr/issues/635
8585
dbpedia
1
50
https://robbmann.io/posts/emacs-treesit-auto/
en
Getting Emacs 29 to Automatically Use Tree-sitter Modes
https://robbmann.io/favicon-32x32.png
https://robbmann.io/favicon-32x32.png
[ "https://robbmann.io/img/robb_python_grey_huf4e52b91f6345de53e62f8c2a64f08ae_471021_192x192_fill_box_smart1_3.png" ]
[]
[]
[ "" ]
null
[ "Robert Enzmann" ]
2023-01-22T00:00:00-05:00
It's Robb, man!
en
/apple-touch-icon.png
robbmann
https://robbmann.io/posts/emacs-treesit-auto/
Recently, /u/casouri posted a guide to getting started with the new built-in tree-sitter capabilities for Emacs 29. In that post, they mention that there will be no automatic major-mode fallback for Emacs 29. That means I would have to use M-x python-ts-mode manually, or change the entry in auto-mode-alist to use python-ts-mode, in order to take advantage of the new tree-sitter functionality. Of course, that would still leave the problem of when the Python tree-sitter grammar isn’t installed, in which case python-ts-mode is going to fail. To solve this issue, I wrote a very small package that adjusts the new major-mode-remap-alist variable based on what grammars are ready on your machine. If a language’s tree-sitter grammar is installed, it will use that mode. If not, it will use the original major mode. Simple as that! For the impatient: treesit-auto.el # The package I wound up with is available on GitHub and MELPA as treesit-auto.el. So long as MELPA is on your package-archives list like this: Then you can use M-x package-refresh-contents followed by M-x package-install RET treesit-auto. If you also like having a local copy of the git repository itself, then package-vc-install is a better fit: Then, in your configuration file: See the README on GitHub for all the goodies you can put in the :config block. Origins of treesit-auto.el # The recommendation in Yuan’s article was to use define-derived-mode along with treesit-ready-p. In the NEWS (C-h n), however, I noticed a new variable major-mode-remap-alist, which at a glance appears suitable for a similar cause. For my Emacs configuration, I had two things I wanted to accomplish: Set all of the URLs for treesit-language-source-alist up front, so that I need only use treesit-install-language-grammar RET python RET, instead of writing out everything interactively Use the same list of available grammars to remap between tree-sitter modes and their default fallbacks Initially, I tried Yuan’s suggested approach with define-derived-mode, but I didn’t want to repeat code for every major mode I wanted fallback for. Trying to expand the major mode names correctly in a loop wound up unwieldy, because expanding the names properly for the define-derived-mode macro was too challenging for my current skill level with Emacs lisp, and wound up cluttering the global namespace more than I liked when auto-completing through M-x. Instead, I decided take a two step approach: Set up treesit-language-source-alist with the grammars I’ll probably use Loop over the keys in this alist to define the association between a tree-sitter mode and its default fallback through major-mode-remap-alist This makes the code we need to actually write a little simpler, since an association like python-mode to python-ts-mode can be automatic (since they share a name), and we can use a customizable alist for specifying the edge cases, such as toml-ts-mode falling back to conf-toml-mode. To start with, I just had this: At this point, I can just use M-x treesit-install-language-grammar RET bash to get the Bash grammar, and similarly for other languages. Then, I made an alist of the “weird” cases: Setting the CDR to nil explicitly means I didn’t want any type of fallback to be attempted whatsoever for a given tree-sitter mode, even if something similarly named might be installed. Finally, I had a simple loop where I constructed the symbols for the mode and the tree-sitter mode via intern and concat, and check whether the tree-sitter version is available through treesit-ready-p. 
If it is, we remap the base mode to the tree-sitter one in major-mode-remap-alist. If it isn’t ready, then we do the opposite: remap the tree-sitter mode to the base version.
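The Emacs Lisp snippets from the original post did not survive extraction. The following is a sketch of the two-step approach described above (grammar sources first, then remapping driven by treesit-ready-p), not the author's exact code; the grammar URLs, the languages chosen, and the my/treesit-fallback-alist name are illustrative assumptions.

ELISP
;; Step 1: declare where grammars come from, so a plain
;; M-x treesit-install-language-grammar RET python RET works.
(setq treesit-language-source-alist
      '((python "https://github.com/tree-sitter/tree-sitter-python")
        (bash "https://github.com/tree-sitter/tree-sitter-bash")
        (toml "https://github.com/tree-sitter/tree-sitter-toml")))

;; Edge cases where the fallback mode is not simply NAME-mode.
(setq my/treesit-fallback-alist
      '((toml-ts-mode . conf-toml-mode)
        (bash-ts-mode . sh-mode)))

;; Step 2: remap in whichever direction matches what is actually installed.
(dolist (lang (mapcar #'car treesit-language-source-alist))
  (let* ((ts-mode (intern (concat (symbol-name lang) "-ts-mode")))
         (fallback (or (alist-get ts-mode my/treesit-fallback-alist)
                       (intern (concat (symbol-name lang) "-mode")))))
    (if (treesit-ready-p lang t)
        (add-to-list 'major-mode-remap-alist (cons fallback ts-mode))
      (add-to-list 'major-mode-remap-alist (cons ts-mode fallback)))))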
8585
dbpedia
0
84
https://en.wikipedia.org/wiki/APT_(software)
en
APT (software)
https://upload.wikimedia…ll_mediawiki.png
https://upload.wikimedia…ll_mediawiki.png
[ "https://en.wikipedia.org/static/images/icons/wikipedia.png", "https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-wordmark-en.svg", "https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-tagline-en.svg", "https://upload.wikimedia.org/wikipedia/commons/thumb/1/19/Apt-get_install_mediawiki.png/220px-Apt-get_install_mediawiki.png", "https://upload.wikimedia.org/wikipedia/en/thumb/8/8a/OOjs_UI_icon_edit-ltr-progressive.svg/10px-OOjs_UI_icon_edit-ltr-progressive.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/8/8a/OOjs_UI_icon_edit-ltr-progressive.svg/10px-OOjs_UI_icon_edit-ltr-progressive.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/8/8a/OOjs_UI_icon_edit-ltr-progressive.svg/10px-OOjs_UI_icon_edit-ltr-progressive.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/8/87/Synaptic_Package_Manager.png/300px-Synaptic_Package_Manager.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Free_and_open-source_software_logo_%282009%29.svg/28px-Free_and_open-source_software_logo_%282009%29.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/6/66/Openlogo-debianV2.svg/80px-Openlogo-debianV2.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b0/NewTux.svg/13px-NewTux.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Free_and_open-source_software_logo_%282009%29.svg/16px-Free_and_open-source_software_logo_%282009%29.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/9/96/Symbol_category_class.svg/16px-Symbol_category_class.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/4/4a/Commons-logo.svg/12px-Commons-logo.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0b/Wikiversity_logo_2017.svg/16px-Wikiversity_logo_2017.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/d/db/Symbol_list_class.svg/16px-Symbol_list_class.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/9/96/Symbol_category_class.svg/16px-Symbol_category_class.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/4/4a/Commons-logo.svg/12px-Commons-logo.svg.png", "https://login.wikimedia.org/wiki/Special:CentralAutoLogin/start?type=1x1", "https://en.wikipedia.org/static/images/footer/wikimedia-button.svg", "https://en.wikipedia.org/static/images/footer/poweredby_mediawiki.svg" ]
[]
[]
[ "" ]
null
[ "Contributors to Wikimedia projects" ]
2003-02-23T21:24:49+00:00
en
/static/apple-touch/wikipedia.png
https://en.wikipedia.org/wiki/APT_(software)
Free software package management system For other uses, see APT (disambiguation) § Computing and software. Advanced Package Tool. Developer(s): The Debian Project; Initial release: 31 March 1998;[1] Stable release: 2.9.7[2] / 30 July 2024; Written in: C++,[3] shell script,[3] Extensible Markup Language,[3] CMake,[3] C,[3] Perl;[3] Operating system: Unix-like; Type: Package manager; License: GPLv2+; Website: wiki.debian.org/Apt. Advanced package tool, or APT, is a free-software user interface that works with core libraries to handle the installation and removal of software on Debian and Debian-based Linux distributions.[4] APT simplifies the process of managing software on Unix-like computer systems by automating the retrieval, configuration and installation of software packages, either from precompiled files or by compiling source code.[4] Usage APT is a collection of tools distributed in a package named apt. A significant part of APT is defined in a C++ library of functions; APT also includes command-line programs for dealing with packages, which use the library. Three such programs are apt, apt-get and apt-cache. They are commonly used in examples because they are simple and ubiquitous. The apt package is of "important" priority in all current Debian releases, and is therefore included in a default Debian installation. APT can be considered a front end to dpkg, friendlier than the older dselect front end. While dpkg performs actions on individual packages, APT manages relations (especially dependencies) between them, as well as sourcing and management of higher-level versioning decisions (release tracking and version pinning). APT is often hailed as one of Debian's best features,[5][6][7][8] which Debian developers attribute to the strict quality controls in Debian's policy.[9][10] A major feature of APT is the way it calls dpkg — it does topological sorting of the list of packages to be installed or removed and calls dpkg in the best possible sequence. In some cases, it utilizes the --force options of dpkg. However, it only does this when it is unable to calculate how to avoid the reason dpkg requires the action to be forced. Installing software The user indicates one or more packages to be installed. Each package name is phrased as just the name portion of the package, not a fully qualified filename (for instance, in a Debian system, libc6 would be the argument provided, not libc6_1.9.6-2.deb). Notably, APT automatically gets and installs packages upon which the indicated package depends (if necessary). This was an original distinguishing characteristic of APT-based package management systems, as it avoided installation failure due to missing dependencies, a type of dependency hell. Another distinction is the retrieval of packages from remote repositories. APT uses a location configuration file (/etc/apt/sources.list) to locate the desired packages, which might be available on the network or a removable storage medium, for example, and retrieve them, and also obtain information about available (but not installed) packages. APT provides other command options to override decisions made by apt-get's conflict resolution system. One option is to force a particular version of a package. This can downgrade a package and render dependent software inoperable, so the user must be careful. Finally, the apt_preferences mechanism allows the user to create an alternative installation policy for individual packages.
The user can specify packages using a POSIX regular expression. APT searches its cached list of packages and lists the dependencies that must be installed or updated. APT retrieves, configures and installs the dependencies automatically. Triggers are the treatment of deferred actions. Usage modes of apt and apt-get that facilitate updating installed packages include: update is used to resynchronize the package index files from their sources. The lists of available packages are fetched from the location(s) specified in /etc/apt/sources.list. For example, when using a Debian archive, this command retrieves and scans the Packages.gz files, so that information about new and updated packages is available. upgrade is used to install the newest versions of all packages currently installed on the system from the sources enumerated in /etc/apt/sources.list. Packages currently installed with new versions available are retrieved and upgraded; under no circumstances are currently installed packages removed, or packages not already installed retrieved and installed. New versions of currently installed packages that cannot be upgraded without changing the install status of another package will be left at their current version. full-upgrade (apt) and dist-upgrade (apt-get), in addition to performing the function of upgrade, also intelligently handles changing dependencies with new versions of packages; apt and apt-get have a "smart" conflict resolution system, and will attempt to upgrade the most important packages at the expense of less important ones if necessary. The /etc/apt/sources.list file contains a list of locations from which to retrieve desired package files.[4] aptitude has a smarter dist-upgrade feature called full-upgrade.[11] Configuration and files [edit] /etc/apt contains the APT configuration folders and files. apt-config is the APT Configuration Query program.[12] apt-config dump shows the configuration.[13] Files [edit] /etc/apt/sources.list:[14] Locations to fetch packages from. /etc/apt/sources.list.d/: Additional source list fragments. /etc/apt/apt.conf: APT configuration file. /etc/apt/apt.conf.d/: APT configuration file fragments. /etc/apt/preferences.d/: Directory with version preferences files. This is where "pinning" is specified, i.e. a preference to get certain packages from a separate source or from a different version of a distribution. /var/cache/apt/archives/: Storage area for retrieved package files. /var/cache/apt/archives/partial/: Storage area for package files in transit. /var/lib/apt/lists/: Storage area for state information for each package resource specified in sources.list /var/lib/apt/lists/partial/: Storage area for state information in transit. Sources [edit] APT relies on the concept of repositories in order to find software and resolve dependencies. For APT, a repository is a directory containing packages along with an index file. This can be specified as a networked or CD-ROM location. As of 14 August 2021, the Debian project keeps a central repository of over 50,000 software packages ready for download and installation.[15] Any number of additional repositories can be added to APT's sources.list configuration file (/etc/apt/sources.list) and then be queried by APT. Graphical front ends often allow modifying sources.list more simply (apt-setup). Once a package repository has been specified (like during the system installation), packages in that repository can be installed without specifying a source and will be kept up-to-date automatically. 
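As a concrete illustration of the files described above, a sources.list entry and an apt_preferences pinning stanza might look like the following; the repository, suite, and priority shown are illustrative examples rather than recommendations:

# /etc/apt/sources.list -- archive type, URI, suite, components
deb http://deb.debian.org/debian bookworm main

# /etc/apt/preferences -- prefer packages from the stable release at priority 700
Package: *
Pin: release a=stable
Pin-Priority: 700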
In addition to network repositories, compact discs and other storage media (USB keydrive, hard disks...) can be used as well, using apt-cdrom[16] or adding file:/ URI[17] to the source list file. apt-cdrom can specify a folder other than a CD-ROM, using the -d option (i.e. a hard disk or a USB keydrive). The Debian CDs available for download contain Debian repositories. This allows non-networked machines to be upgraded. One can also use apt-zip. Problems may appear when several sources offer the same package(s). Systems that have such possibly conflicting sources can use APT pinning to control which sources should be preferred. APT pinning [edit] The APT pinning feature allows users to force APT to choose particular versions of packages which may be available in different versions from different repositories. This allows administrators to ensure that packages are not upgraded to versions which may conflict with other packages on the system, or that have not been sufficiently tested for unwelcome changes. In order to do this, the pins in APT's preferences file (/etc/apt/preferences) must be modified,[18] although graphical front ends often make pinning simpler. Front ends [edit] Several other front ends to APT exist, which provide more advanced installation functions and more intuitive interfaces. These include: Synaptic, a GTK graphical user interface Ubuntu Software Center, a GTK graphical user interface developed by the Ubuntu project aptitude, a console client with CLI and ncurses-based TUI interfaces KPackage, part of KDE Adept package manager, a graphical user interface for KDE (deb, rpm, bsd) PackageKit, a D-Bus frontend, maintained by freedesktop.org, powers GNOME Software and KDE Discover. GDebi, a GTK-based tool sponsored for Ubuntu. (There is also a Qt version, available in the Ubuntu repositories as gdebi-kde.) apt-cdrom, a way to add a new CDROM to APT's list of available repositories (sources.lists). It is necessary to use apt-cdrom to add CDs to the APT system, it cannot be done by hand. apt-zip, a way to use apt with removable media, specifically USB flash drives. aptURL, an Ubuntu software package that enables end-user applications to install with a single-click through a browser.[19][20] Cydia, a package manager for jailbroken iOS based on APT (ported to iOS as part of the Telesphoreo project).[21][22] Sileo, like Cydia, a package manager for jailbroken iOS based on newer versions of APT (ported to iOS by the Electra team) gnome-apt, a GTK/GNOME-widget-based graphical front end. Developed by Havoc Pennington[23] Muon discover (previous Muon software center), a Qt-based graphical user interface Hildon application manager (Maemo application), a Maemo front end apticron, a service designed to be run via cron to email notices of pending updates to a system administrator (sysadmin). APT Daemon, a front end that runs as a service to allow users to install software through PolicyKit and is in turn the framework used by Ubuntu software center (along with the Linux Mint software manager). Package installer, part of MX Linux. Apt-offline: A convenient way to make any available non-containerized change to any Debian-type Linux installation without using a direct Internet connection. However, a temporary direct connection can be required, such as to install Apt-offline on some of the relevant types of Linux, and to add PPA's to the sources-list. APT front ends can: search for new packages; upgrade packages; install or remove packages and upgrade the whole system to a new release. 
APT front ends can list the dependencies of packages being installed or upgraded, ask the administrator if packages recommended or suggested by newly installed packages should be installed too, automatically install dependencies, and perform other operations on the system such as removing obsolete files and packages.

History

The original effort that led to the apt-get program was the dselect replacement project, known by its codename Deity.[24] This project was commissioned in 1997 by Brian White, the Debian release manager at the time. The first functional version of apt-get was called dpkg-get and was only intended to be a test program for the core library functions that would underpin the new user interface (UI).[25]

Much of the original development of APT was done on Internet Relay Chat (IRC), so records have been lost. The 'Deity creation team' mailing list archives include only the major highlights. The 'Deity' name was abandoned as the official name for the project due to concerns over the religious nature of the name. The APT name was eventually decided after considerable internal and public discussion. Ultimately the name was proposed on IRC, accepted and then finalized on the mailing lists.[26]

APT was introduced in 1998, and the original test builds were circulated on IRC. The first Debian version that included it was Debian 2.1, released on 9 March 1999.[27]

In the end, the Deity project's original goal of replacing the dselect user interface was not achieved. Work on the user interface portion of the project was abandoned (the user interface directories were removed from the Concurrent Versions System repository) after the first public release of apt-get. The response to APT as a dselect method and a command-line utility was so great and positive that all development efforts focused on maintaining and improving the tool. It was not until much later that several independent people built user interfaces on top of libapt-pkg. Eventually, a new team picked up the project, began to build new features and released version 0.6 of APT, which introduced the Secure APT feature, using strong cryptographic signing to authenticate the package repositories.[28]

Variants

APT was originally designed as a front end for dpkg to work with Debian's .deb packages. A version of APT modified to also work with the RPM Package Manager system was released as APT-RPM.[29] The Fink project has ported APT to Mac OS X for some of its own package management tasks,[30] and APT is also available in OpenSolaris.

apt-file

apt-file is a command, packaged separately from APT, to find which package includes a specific file, or to list all files included in a package on remote repositories.[31]

See also

Alien
AppStream
APTonCD
GNU Guix
Wajig
List of software package management systems

References
8585
dbpedia
1
27
https://www.jetbrains.com/help/pycharm/creating-and-optimizing-imports.html
en
Auto import | PyCharm
https://resources.jetbra…meta/preview.png
https://resources.jetbra…meta/preview.png
[ "https://resources.jetbrains.com/help/img/idea/2024.2/app.actions.quickfixBulb.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.actions.more.svg", "https://resources.jetbrains.com/help/img/idea/2024.2/app.expui.codeInsight.intentionBulb.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_optimize_imports.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.expui.general.settings.svg", "https://resources.jetbrains.com/help/img/idea/2024.2/py_optimize_imports_before_commit.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_reformat-file-dialog.png", "https://resources.jetbrains.com/help/img/idea/2024.2/python_import.png", "https://resources.jetbrains.com/help/img/idea/2024.2/python_import1.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_import_style1.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_import_style2.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_import_inspection.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_convert_imports.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_import_fix_relative.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_relative_absolute_imports_intention.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_import_auto_completion.png", "https://resources.jetbrains.com/help/img/idea/2024.2/ws_es6_auto-import.png", "https://resources.jetbrains.com/help/img/idea/2024.2/ws_es6_auto-import.png", "https://resources.jetbrains.com/help/img/idea/2024.2/py_ignore_missing_import.png" ]
[]
[]
[ "" ]
null
[]
null
Basic procedures to create and optimize imports in PyCharm. Learn more how to import the missing import or XML namespace.
en
https://jetbrains.com/ap…e-touch-icon.png
PyCharm Help
https://www.jetbrains.com/help/pycharm/creating-and-optimizing-imports.html
Auto import

When you reference a class that has not been imported, PyCharm helps you locate this file and add it to the list of imports. You can import a single class or an entire package, depending on your settings. The import statement is added to the imports section, but the caret does not move from the current position, and your current editing session does not suspend. This feature is known as the Import Assistant. Using the Import Assistant is the preferred way to handle imports in PyCharm because import optimizations are not supported via the command line.

The same possibility applies to XML files. When you type a tag with an unbound namespace, the import assistant suggests creating a namespace and offers a list of appropriate choices.

Automatically add import statements

You can configure the IDE to automatically add import statements if there are no options to choose from. Press Ctrl+Alt+S to open settings and then select Editor | General | Auto Import. In the Python section, configure automatic imports:

Select Show import popup to automatically display an import popup when typing the name of a class that lacks an import statement.
Select one of the Preferred import style options to define the way an import statement is generated.

When tooltips are disabled, unresolved references are underlined and marked with the red bulb icon. To view the list of suggestions, click this icon (or press Alt+Enter) and select Import class.

Disable all tooltips

Hover over the inspection widget in the top-right corner of the editor, open its menu, and disable the Show Auto-Import Tooltip option.

Disable auto import

If you want to completely disable auto-import, make sure that:

All import tooltips are disabled.
The automatic insertion of import statements is disabled.

Optimize imports

The Optimize Imports feature helps you remove unused imports and organize import statements in the current file, or in all files in a directory at once, according to the rules specified in Settings | Editor | Code Style | <language> | Imports.

Optimize all imports

Select a file or a directory in the Project tool window (View | Tool Windows | Project). Do any of the following:

In the main menu, go to Code | Optimize Imports (or press Ctrl+Alt+O).
From the context menu, select Optimize Imports.

(If you've selected a directory) Choose whether you want to optimize imports in all files in the directory, or only in locally modified files (if your project is under version control), and click Run.

Optimize imports in a single file

Place the caret at the import statement and press Alt+Enter or click the intention action icon. Select Optimize imports.

Optimize imports when committing changes to Git

If your project is under version control, you can instruct PyCharm to optimize imports in modified files before committing them to VCS. Press Ctrl+K or select Git | Commit from the main menu. Click the settings icon and, in the commit message area, select the Optimize imports checkbox.

Automatically optimize imports on save

You can configure the IDE to optimize imports in modified files automatically when your changes are saved. Press Ctrl+Alt+S to open settings and then select Tools | Actions on Save. Enable the Optimize imports option. Additionally, from the All file types list, select the types of files in which you want to optimize imports. Apply the changes and close the dialog.

Optimize imports when reformatting a file

You can tell PyCharm to optimize imports in a file every time it is reformatted.
Open the file in the editor, press Ctrl+Alt+Shift+L, and make sure the Optimize imports checkbox is selected in the Reformat File dialog that opens. After that, every time you press Ctrl+Alt+L in this project, PyCharm will optimize the file's imports automatically.

Creating imports on the fly

Import packages on-the-fly

Start typing a name in the editor. If the name references a class that has not been imported, an import prompt appears. Otherwise, the unresolved references are underlined, and you have to invoke the Add import intention action explicitly: press Alt+Enter and, if there are multiple choices, select the desired import from the list.

You can define your preferred import style for Python code by using the following options available on the Auto Import page of the project settings (Settings | Editor | General | Auto Import):

from <module> import <name>
import <module>.<name>

Toggling relative and absolute imports

PyCharm helps you organize relative and absolute imports within a source root. With the specific intention, you can convert absolute imports into relative and relative imports into absolute. If your code contains any relative import statement, PyCharm will add relative imports when fixing the missing imports. Note that relative imports work only within the current source root: you cannot relatively import a package from another source root.

The intentions prompting you to convert imports are enabled by default. To disable them, open the project Settings (Ctrl+Alt+S), select Editor | Intentions, and deselect the Convert absolute import to relative and Convert relative import to absolute intentions.

When you complete an ES6 symbol or a CommonJS module, PyCharm either decides on the style of the import statement itself or displays a popup where you can choose the style you need. Learn more from Auto-import in JavaScript.

Last modified: 28 June 2024
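To make the two Preferred import style options and the relative/absolute distinction described above concrete, here is a small, purely illustrative Python sketch. The package and module names in the comments (mypkg, utils, helper) are hypothetical and are not taken from the page above.

# Roughly what the two "Preferred import style" settings produce for os.path.join:
import os.path                # import <module>.<name> style
from os.path import join     # from <module> import <name> style

print(os.path.join("base", "data"))
print(join("base", "data"))

# Inside a hypothetical package mypkg/, the same dependency can be written
# absolutely or relatively; relative imports resolve only within the current
# source root, as noted above:
#     from mypkg.utils import helper    # absolute import
#     from .utils import helper         # relative import (only valid inside mypkg)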
8585
dbpedia
2
25
https://docs.python.org/3/howto/logging-cookbook.html
en
Logging Cookbook
https://docs.python.org/…tic/og-image.png
https://docs.python.org/…tic/og-image.png
[ "https://docs.python.org/3/_static/py.svg", "https://docs.python.org/3/_static/py.svg", "https://docs.python.org/3/_static/py.svg" ]
[]
[]
[ "" ]
null
[]
null
Author, Vinay Sajip <vinay_sajip at red-dove dot com>,. This page contains a number of recipes related to logging, which have been found useful in the past. For links to tutorial and reference info...
en
../_static/py.svg
Python documentation
https://docs.python.org/3/howto/logging-cookbook.html
Logging Cookbook¶ Author: Vinay Sajip <vinay_sajip at red-dove dot com> This page contains a number of recipes related to logging, which have been found useful in the past. For links to tutorial and reference information, please see Other resources. Using logging in multiple modules¶ Multiple calls to logging.getLogger('someLogger') return a reference to the same logger object. This is true not only within the same module, but also across modules as long as it is in the same Python interpreter process. It is true for references to the same object; additionally, application code can define and configure a parent logger in one module and create (but not configure) a child logger in a separate module, and all logger calls to the child will pass up to the parent. Here is a main module: import logging import auxiliary_module # create logger with 'spam_application' logger = logging.getLogger('spam_application') logger.setLevel(logging.DEBUG) # create file handler which logs even debug messages fh = logging.FileHandler('spam.log') fh.setLevel(logging.DEBUG) # create console handler with a higher log level ch = logging.StreamHandler() ch.setLevel(logging.ERROR) # create formatter and add it to the handlers formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') fh.setFormatter(formatter) ch.setFormatter(formatter) # add the handlers to the logger logger.addHandler(fh) logger.addHandler(ch) logger.info('creating an instance of auxiliary_module.Auxiliary') a = auxiliary_module.Auxiliary() logger.info('created an instance of auxiliary_module.Auxiliary') logger.info('calling auxiliary_module.Auxiliary.do_something') a.do_something() logger.info('finished auxiliary_module.Auxiliary.do_something') logger.info('calling auxiliary_module.some_function()') auxiliary_module.some_function() logger.info('done with auxiliary_module.some_function()') Here is the auxiliary module: import logging # create logger module_logger = logging.getLogger('spam_application.auxiliary') class Auxiliary: def __init__(self): self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary') self.logger.info('creating an instance of Auxiliary') def do_something(self): self.logger.info('doing something') a = 1 + 1 self.logger.info('done doing something') def some_function(): module_logger.info('received a call to "some_function"') The output looks like this: 2005-03-23 23:47:11,663 - spam_application - INFO - creating an instance of auxiliary_module.Auxiliary 2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO - creating an instance of Auxiliary 2005-03-23 23:47:11,665 - spam_application - INFO - created an instance of auxiliary_module.Auxiliary 2005-03-23 23:47:11,668 - spam_application - INFO - calling auxiliary_module.Auxiliary.do_something 2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO - doing something 2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO - done doing something 2005-03-23 23:47:11,670 - spam_application - INFO - finished auxiliary_module.Auxiliary.do_something 2005-03-23 23:47:11,671 - spam_application - INFO - calling auxiliary_module.some_function() 2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO - received a call to 'some_function' 2005-03-23 23:47:11,673 - spam_application - INFO - done with auxiliary_module.some_function() Logging from multiple threads¶ Logging from multiple threads requires no special effort. 
The following example shows logging from the main (initial) thread and another thread: import logging import threading import time def worker(arg): while not arg['stop']: logging.debug('Hi from myfunc') time.sleep(0.5) def main(): logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d%(threadName)s%(message)s') info = {'stop': False} thread = threading.Thread(target=worker, args=(info,)) thread.start() while True: try: logging.debug('Hello from main') time.sleep(0.75) except KeyboardInterrupt: info['stop'] = True break thread.join() if __name__ == '__main__': main() When run, the script should print something like the following: 0 Thread-1 Hi from myfunc 3 MainThread Hello from main 505 Thread-1 Hi from myfunc 755 MainThread Hello from main 1007 Thread-1 Hi from myfunc 1507 MainThread Hello from main 1508 Thread-1 Hi from myfunc 2010 Thread-1 Hi from myfunc 2258 MainThread Hello from main 2512 Thread-1 Hi from myfunc 3009 MainThread Hello from main 3013 Thread-1 Hi from myfunc 3515 Thread-1 Hi from myfunc 3761 MainThread Hello from main 4017 Thread-1 Hi from myfunc 4513 MainThread Hello from main 4518 Thread-1 Hi from myfunc This shows the logging output interspersed as one might expect. This approach works for more threads than shown here, of course. Multiple handlers and formatters¶ Loggers are plain Python objects. The addHandler() method has no minimum or maximum quota for the number of handlers you may add. Sometimes it will be beneficial for an application to log all messages of all severities to a text file while simultaneously logging errors or above to the console. To set this up, simply configure the appropriate handlers. The logging calls in the application code will remain unchanged. Here is a slight modification to the previous simple module-based configuration example: import logging logger = logging.getLogger('simple_example') logger.setLevel(logging.DEBUG) # create file handler which logs even debug messages fh = logging.FileHandler('spam.log') fh.setLevel(logging.DEBUG) # create console handler with a higher log level ch = logging.StreamHandler() ch.setLevel(logging.ERROR) # create formatter and add it to the handlers formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') ch.setFormatter(formatter) fh.setFormatter(formatter) # add the handlers to logger logger.addHandler(ch) logger.addHandler(fh) # 'application' code logger.debug('debug message') logger.info('info message') logger.warning('warn message') logger.error('error message') logger.critical('critical message') Notice that the ‘application’ code does not care about multiple handlers. All that changed was the addition and configuration of a new handler named fh. The ability to create new handlers with higher- or lower-severity filters can be very helpful when writing and testing an application. Instead of using many print statements for debugging, use logger.debug: Unlike the print statements, which you will have to delete or comment out later, the logger.debug statements can remain intact in the source code and remain dormant until you need them again. At that time, the only change that needs to happen is to modify the severity level of the logger and/or handler to debug. Logging to multiple destinations¶ Let’s say you want to log to console and file with different message formats and in differing circumstances. Say you want to log messages with levels of DEBUG and higher to file, and those messages at level INFO and higher to the console. 
Let’s also assume that the file should contain timestamps, but the console messages should not. Here’s how you can achieve this: import logging # set up logging to file - see previous section for more details logging.basicConfig(level=logging.DEBUG, format='%(asctime)s%(name)-12s%(levelname)-8s%(message)s', datefmt='%m-%d %H:%M', filename='/tmp/myapp.log', filemode='w') # define a Handler which writes INFO messages or higher to the sys.stderr console = logging.StreamHandler() console.setLevel(logging.INFO) # set a format which is simpler for console use formatter = logging.Formatter('%(name)-12s: %(levelname)-8s%(message)s') # tell the handler to use this format console.setFormatter(formatter) # add the handler to the root logger logging.getLogger('').addHandler(console) # Now, we can log to the root logger, or any other logger. First the root... logging.info('Jackdaws love my big sphinx of quartz.') # Now, define a couple of other loggers which might represent areas in your # application: logger1 = logging.getLogger('myapp.area1') logger2 = logging.getLogger('myapp.area2') logger1.debug('Quick zephyrs blow, vexing daft Jim.') logger1.info('How quickly daft jumping zebras vex.') logger2.warning('Jail zesty vixen who grabbed pay from quack.') logger2.error('The five boxing wizards jump quickly.') When you run this, on the console you will see root : INFO Jackdaws love my big sphinx of quartz. myapp.area1 : INFO How quickly daft jumping zebras vex. myapp.area2 : WARNING Jail zesty vixen who grabbed pay from quack. myapp.area2 : ERROR The five boxing wizards jump quickly. and in the file you will see something like 10-22 22:19 root INFO Jackdaws love my big sphinx of quartz. 10-22 22:19 myapp.area1 DEBUG Quick zephyrs blow, vexing daft Jim. 10-22 22:19 myapp.area1 INFO How quickly daft jumping zebras vex. 10-22 22:19 myapp.area2 WARNING Jail zesty vixen who grabbed pay from quack. 10-22 22:19 myapp.area2 ERROR The five boxing wizards jump quickly. As you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations. This example uses console and file handlers, but you can use any number and combination of handlers you choose. Note that the above choice of log filename /tmp/myapp.log implies use of a standard location for temporary files on POSIX systems. On Windows, you may need to choose a different directory name for the log - just ensure that the directory exists and that you have the permissions to create and update files in it. Custom handling of levels¶ Sometimes, you might want to do something slightly different from the standard handling of levels in handlers, where all levels above a threshold get processed by a handler. To do this, you need to use filters. 
Let’s look at a scenario where you want to arrange things as follows: Send messages of severity INFO and WARNING to sys.stdout Send messages of severity ERROR and above to sys.stderr Send messages of severity DEBUG and above to file app.log Suppose you configure logging with the following JSON: { "version":1, "disable_existing_loggers":false, "formatters":{ "simple":{ "format":"%(levelname)-8s - %(message)s" } }, "handlers":{ "stdout":{ "class":"logging.StreamHandler", "level":"INFO", "formatter":"simple", "stream":"ext://sys.stdout" }, "stderr":{ "class":"logging.StreamHandler", "level":"ERROR", "formatter":"simple", "stream":"ext://sys.stderr" }, "file":{ "class":"logging.FileHandler", "formatter":"simple", "filename":"app.log", "mode":"w" } }, "root":{ "level":"DEBUG", "handlers":[ "stderr", "stdout", "file" ] } } This configuration does almost what we want, except that sys.stdout would show messages of severity ERROR and above as well as INFO and WARNING messages. To prevent this, we can set up a filter which excludes those messages and add it to the relevant handler. This can be configured by adding a filters section parallel to formatters and handlers: { "filters":{ "warnings_and_below":{ "()":"__main__.filter_maker", "level":"WARNING" } } } and changing the section on the stdout handler to add it: { "stdout":{ "class":"logging.StreamHandler", "level":"INFO", "formatter":"simple", "stream":"ext://sys.stdout", "filters":["warnings_and_below"] } } A filter is just a function, so we can define the filter_maker (a factory function) as follows: def filter_maker(level): level = getattr(logging, level) def filter(record): return record.levelno <= level return filter This converts the string argument passed in to a numeric level, and returns a function which only returns True if the level of the passed in record is at or below the specified level. Note that in this example I have defined the filter_maker in a test script main.py that I run from the command line, so its module will be __main__ - hence the __main__.filter_maker in the filter configuration. You will need to change that if you define it in a different module. 
With the filter added, we can run main.py, which in full is: import json import logging import logging.config CONFIG = ''' { "version": 1, "disable_existing_loggers": false, "formatters": { "simple": { "format": "%(levelname)-8s - %(message)s" } }, "filters": { "warnings_and_below": { "()" : "__main__.filter_maker", "level": "WARNING" } }, "handlers": { "stdout": { "class": "logging.StreamHandler", "level": "INFO", "formatter": "simple", "stream": "ext://sys.stdout", "filters": ["warnings_and_below"] }, "stderr": { "class": "logging.StreamHandler", "level": "ERROR", "formatter": "simple", "stream": "ext://sys.stderr" }, "file": { "class": "logging.FileHandler", "formatter": "simple", "filename": "app.log", "mode": "w" } }, "root": { "level": "DEBUG", "handlers": [ "stderr", "stdout", "file" ] } } ''' def filter_maker(level): level = getattr(logging, level) def filter(record): return record.levelno <= level return filter logging.config.dictConfig(json.loads(CONFIG)) logging.debug('A DEBUG message') logging.info('An INFO message') logging.warning('A WARNING message') logging.error('An ERROR message') logging.critical('A CRITICAL message') And after running it like this: python main.py2>stderr.log >stdout.log We can see the results are as expected: $ more *.log :::::::::::::: app.log :::::::::::::: DEBUG - A DEBUG message INFO - An INFO message WARNING - A WARNING message ERROR - An ERROR message CRITICAL - A CRITICAL message :::::::::::::: stderr.log :::::::::::::: ERROR - An ERROR message CRITICAL - A CRITICAL message :::::::::::::: stdout.log :::::::::::::: INFO - An INFO message WARNING - A WARNING message Configuration server example¶ Here is an example of a module using the logging configuration server: import logging import logging.config import time import os # read initial config file logging.config.fileConfig('logging.conf') # create and start listener on port 9999 t = logging.config.listen(9999) t.start() logger = logging.getLogger('simpleExample') try: # loop through logging calls to see the difference # new configurations make, until Ctrl+C is pressed while True: logger.debug('debug message') logger.info('info message') logger.warning('warn message') logger.error('error message') logger.critical('critical message') time.sleep(5) except KeyboardInterrupt: # cleanup logging.config.stopListening() t.join() And here is a script that takes a filename and sends that file to the server, properly preceded with the binary-encoded length, as the new logging configuration: #!/usr/bin/env python import socket, sys, struct with open(sys.argv[1], 'rb') as f: data_to_send = f.read() HOST = 'localhost' PORT = 9999 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) print('connecting...') s.connect((HOST, PORT)) print('sending config...') s.send(struct.pack('>L', len(data_to_send))) s.send(data_to_send) s.close() print('complete') Dealing with handlers that block¶ Sometimes you have to get your logging handlers to do their work without blocking the thread you’re logging from. This is common in web applications, though of course it also occurs in other scenarios. A common culprit which demonstrates sluggish behaviour is the SMTPHandler: sending emails can take a long time, for a number of reasons outside the developer’s control (for example, a poorly performing mail or network infrastructure). 
But almost any network-based handler can block: Even a SocketHandler operation may do a DNS query under the hood which is too slow (and this query can be deep in the socket library code, below the Python layer, and outside your control). One solution is to use a two-part approach. For the first part, attach only a QueueHandler to those loggers which are accessed from performance-critical threads. They simply write to their queue, which can be sized to a large enough capacity or initialized with no upper bound to their size. The write to the queue will typically be accepted quickly, though you will probably need to catch the queue.Full exception as a precaution in your code. If you are a library developer who has performance-critical threads in their code, be sure to document this (together with a suggestion to attach only QueueHandlers to your loggers) for the benefit of other developers who will use your code. The second part of the solution is QueueListener, which has been designed as the counterpart to QueueHandler. A QueueListener is very simple: it’s passed a queue and some handlers, and it fires up an internal thread which listens to its queue for LogRecords sent from QueueHandlers (or any other source of LogRecords, for that matter). The LogRecords are removed from the queue and passed to the handlers for processing. The advantage of having a separate QueueListener class is that you can use the same instance to service multiple QueueHandlers. This is more resource-friendly than, say, having threaded versions of the existing handler classes, which would eat up one thread per handler for no particular benefit. An example of using these two classes follows (imports omitted): que = queue.Queue(-1) # no limit on size queue_handler = QueueHandler(que) handler = logging.StreamHandler() listener = QueueListener(que, handler) root = logging.getLogger() root.addHandler(queue_handler) formatter = logging.Formatter('%(threadName)s: %(message)s') handler.setFormatter(formatter) listener.start() # The log output will display the thread which generated # the event (the main thread) rather than the internal # thread which monitors the internal queue. This is what # you want to happen. root.warning('Look out!') listener.stop() which, when run, will produce: MainThread: Look out! Note Although the earlier discussion wasn’t specifically talking about async code, but rather about slow logging handlers, it should be noted that when logging from async code, network and even file handlers could lead to problems (blocking the event loop) because some logging is done from asyncio internals. It might be best, if any async code is used in an application, to use the above approach for logging, so that any blocking code runs only in the QueueListener thread. Changed in version 3.5: Prior to Python 3.5, the QueueListener always passed every message received from the queue to every handler it was initialized with. (This was because it was assumed that level filtering was all done on the other side, where the queue is filled.) From 3.5 onwards, this behaviour can be changed by passing a keyword argument respect_handler_level=True to the listener’s constructor. When this is done, the listener compares the level of each message with the handler’s level, and only passes a message to a handler if it’s appropriate to do so. Sending and receiving logging events across a network¶ Let’s say you want to send logging events across a network, and handle them at the receiving end. 
A simple way of doing this is attaching a SocketHandler instance to the root logger at the sending end: import logging, logging.handlers rootLogger = logging.getLogger('') rootLogger.setLevel(logging.DEBUG) socketHandler = logging.handlers.SocketHandler('localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT) # don't bother with a formatter, since a socket handler sends the event as # an unformatted pickle rootLogger.addHandler(socketHandler) # Now, we can log to the root logger, or any other logger. First the root... logging.info('Jackdaws love my big sphinx of quartz.') # Now, define a couple of other loggers which might represent areas in your # application: logger1 = logging.getLogger('myapp.area1') logger2 = logging.getLogger('myapp.area2') logger1.debug('Quick zephyrs blow, vexing daft Jim.') logger1.info('How quickly daft jumping zebras vex.') logger2.warning('Jail zesty vixen who grabbed pay from quack.') logger2.error('The five boxing wizards jump quickly.') At the receiving end, you can set up a receiver using the socketserver module. Here is a basic working example: import pickle import logging import logging.handlers import socketserver import struct class LogRecordStreamHandler(socketserver.StreamRequestHandler): """Handler for a streaming logging request. This basically logs the record using whatever logging policy is configured locally. """ def handle(self): """ Handle multiple requests - each expected to be a 4-byte length, followed by the LogRecord in pickle format. Logs the record according to whatever policy is configured locally. """ while True: chunk = self.connection.recv(4) if len(chunk) < 4: break slen = struct.unpack('>L', chunk)[0] chunk = self.connection.recv(slen) while len(chunk) < slen: chunk = chunk + self.connection.recv(slen - len(chunk)) obj = self.unPickle(chunk) record = logging.makeLogRecord(obj) self.handleLogRecord(record) def unPickle(self, data): return pickle.loads(data) def handleLogRecord(self, record): # if a name is specified, we use the named logger rather than the one # implied by the record. if self.server.logname is not None: name = self.server.logname else: name = record.name logger = logging.getLogger(name) # N.B. EVERY record gets logged. This is because Logger.handle # is normally called AFTER logger-level filtering. If you want # to do filtering, do it at the client end to save wasting # cycles and network bandwidth! logger.handle(record) class LogRecordSocketReceiver(socketserver.ThreadingTCPServer): """ Simple TCP socket-based logging receiver suitable for testing. """ allow_reuse_address = True def __init__(self, host='localhost', port=logging.handlers.DEFAULT_TCP_LOGGING_PORT, handler=LogRecordStreamHandler): socketserver.ThreadingTCPServer.__init__(self, (host, port), handler) self.abort = 0 self.timeout = 1 self.logname = None def serve_until_stopped(self): import select abort = 0 while not abort: rd, wr, ex = select.select([self.socket.fileno()], [], [], self.timeout) if rd: self.handle_request() abort = self.abort def main(): logging.basicConfig( format='%(relativeCreated)5d%(name)-15s%(levelname)-8s%(message)s') tcpserver = LogRecordSocketReceiver() print('About to start TCP server...') tcpserver.serve_until_stopped() if __name__ == '__main__': main() First run the server, and then the client. On the client side, nothing is printed on the console; on the server side, you should see something like: About to start TCP server... 59 root INFO Jackdaws love my big sphinx of quartz. 
59 myapp.area1 DEBUG Quick zephyrs blow, vexing daft Jim. 69 myapp.area1 INFO How quickly daft jumping zebras vex. 69 myapp.area2 WARNING Jail zesty vixen who grabbed pay from quack. 69 myapp.area2 ERROR The five boxing wizards jump quickly. Note that there are some security issues with pickle in some scenarios. If these affect you, you can use an alternative serialization scheme by overriding the makePickle() method and implementing your alternative there, as well as adapting the above script to use your alternative serialization. Running a logging socket listener in production¶ To run a logging listener in production, you may need to use a process-management tool such as Supervisor. Here is a Gist which provides the bare-bones files to run the above functionality using Supervisor. It consists of the following files: File Purpose The web application uses Gunicorn, which is a popular web application server that starts multiple worker processes to handle requests. This example setup shows how the workers can write to the same log file without conflicting with one another — they all go through the socket listener. To test these files, do the following in a POSIX environment: Download the Gist as a ZIP archive using the Download ZIP button. Unzip the above files from the archive into a scratch directory. In the scratch directory, run bash prepare.sh to get things ready. This creates a run subdirectory to contain Supervisor-related and log files, and a venv subdirectory to contain a virtual environment into which bottle, gunicorn and supervisor are installed. Run bash ensure_app.sh to ensure that Supervisor is running with the above configuration. Run venv/bin/python client.py to exercise the web application, which will lead to records being written to the log. Inspect the log files in the run subdirectory. You should see the most recent log lines in files matching the pattern app.log*. They won’t be in any particular order, since they have been handled concurrently by different worker processes in a non-deterministic way. You can shut down the listener and the web application by running venv/bin/supervisorctl -c supervisor.conf shutdown. You may need to tweak the configuration files in the unlikely event that the configured ports clash with something else in your test environment. Adding contextual information to your logging output¶ Sometimes you want logging output to contain contextual information in addition to the parameters passed to the logging call. For example, in a networked application, it may be desirable to log client-specific information in the log (e.g. remote client’s username, or IP address). Although you could use the extra parameter to achieve this, it’s not always convenient to pass the information in this way. While it might be tempting to create Logger instances on a per-connection basis, this is not a good idea because these instances are not garbage collected. While this is not a problem in practice, when the number of Logger instances is dependent on the level of granularity you want to use in logging an application, it could be hard to manage if the number of Logger instances becomes effectively unbounded. Using LoggerAdapters to impart contextual information¶ An easy way in which you can pass contextual information to be output along with logging event information is to use the LoggerAdapter class. This class is designed to look like a Logger, so that you can call debug(), info(), warning(), error(), exception(), critical() and log(). 
These methods have the same signatures as their counterparts in Logger, so you can use the two types of instances interchangeably. When you create an instance of LoggerAdapter, you pass it a Logger instance and a dict-like object which contains your contextual information. When you call one of the logging methods on an instance of LoggerAdapter, it delegates the call to the underlying instance of Logger passed to its constructor, and arranges to pass the contextual information in the delegated call. Here’s a snippet from the code of LoggerAdapter: def debug(self, msg, /, *args, **kwargs): """ Delegate a debug call to the underlying logger, after adding contextual information from this adapter instance. """ msg, kwargs = self.process(msg, kwargs) self.logger.debug(msg, *args, **kwargs) The process() method of LoggerAdapter is where the contextual information is added to the logging output. It’s passed the message and keyword arguments of the logging call, and it passes back (potentially) modified versions of these to use in the call to the underlying logger. The default implementation of this method leaves the message alone, but inserts an ‘extra’ key in the keyword argument whose value is the dict-like object passed to the constructor. Of course, if you had passed an ‘extra’ keyword argument in the call to the adapter, it will be silently overwritten. The advantage of using ‘extra’ is that the values in the dict-like object are merged into the LogRecord instance’s __dict__, allowing you to use customized strings with your Formatter instances which know about the keys of the dict-like object. If you need a different method, e.g. if you want to prepend or append the contextual information to the message string, you just need to subclass LoggerAdapter and override process() to do what you need. Here is a simple example: class CustomAdapter(logging.LoggerAdapter): """ This example adapter expects the passed in dict-like object to have a 'connid' key, whose value in brackets is prepended to the log message. """ def process(self, msg, kwargs): return '[%s] %s' % (self.extra['connid'], msg), kwargs which you can use like this: logger = logging.getLogger(__name__) adapter = CustomAdapter(logger, {'connid': some_conn_id}) Then any events that you log to the adapter will have the value of some_conn_id prepended to the log messages. Using objects other than dicts to pass contextual information¶ You don’t need to pass an actual dict to a LoggerAdapter - you could pass an instance of a class which implements __getitem__ and __iter__ so that it looks like a dict to logging. This would be useful if you want to generate values dynamically (whereas the values in a dict would be constant). Using Filters to impart contextual information¶ You can also add contextual information to log output using a user-defined Filter. Filter instances are allowed to modify the LogRecords passed to them, including adding additional attributes which can then be output using a suitable format string, or if needed a custom Formatter. For example in a web application, the request being processed (or at least, the interesting parts of it) can be stored in a threadlocal (threading.local) variable, and then accessed from a Filter to add, say, information from the request - say, the remote IP address and remote user’s username - to the LogRecord, using the attribute names ‘ip’ and ‘user’ as in the LoggerAdapter example above. In that case, the same format string can be used to get similar output to that shown above. 
Here’s an example script: import logging from random import choice class ContextFilter(logging.Filter): """ This is a filter which injects contextual information into the log. Rather than use actual contextual information, we just use random data in this demo. """ USERS = ['jim', 'fred', 'sheila'] IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1'] def filter(self, record): record.ip = choice(ContextFilter.IPS) record.user = choice(ContextFilter.USERS) return True if __name__ == '__main__': levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL) logging.basicConfig(level=logging.DEBUG, format='%(asctime)-15s%(name)-5s%(levelname)-8s IP: %(ip)-15s User: %(user)-8s%(message)s') a1 = logging.getLogger('a.b.c') a2 = logging.getLogger('d.e.f') f = ContextFilter() a1.addFilter(f) a2.addFilter(f) a1.debug('A debug message') a1.info('An info message with %s', 'some parameters') for x in range(10): lvl = choice(levels) lvlname = logging.getLevelName(lvl) a2.log(lvl, 'A message at %s level with %d%s', lvlname, 2, 'parameters') which, when run, produces something like: 2010-09-06 22:38:15,292 a.b.c DEBUG IP: 123.231.231.123 User: fred A debug message 2010-09-06 22:38:15,300 a.b.c INFO IP: 192.168.0.1 User: sheila An info message with some parameters 2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1 User: sheila A message at CRITICAL level with 2 parameters 2010-09-06 22:38:15,300 d.e.f ERROR IP: 127.0.0.1 User: jim A message at ERROR level with 2 parameters 2010-09-06 22:38:15,300 d.e.f DEBUG IP: 127.0.0.1 User: sheila A message at DEBUG level with 2 parameters 2010-09-06 22:38:15,300 d.e.f ERROR IP: 123.231.231.123 User: fred A message at ERROR level with 2 parameters 2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1 User: jim A message at CRITICAL level with 2 parameters 2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1 User: sheila A message at CRITICAL level with 2 parameters 2010-09-06 22:38:15,300 d.e.f DEBUG IP: 192.168.0.1 User: jim A message at DEBUG level with 2 parameters 2010-09-06 22:38:15,301 d.e.f ERROR IP: 127.0.0.1 User: sheila A message at ERROR level with 2 parameters 2010-09-06 22:38:15,301 d.e.f DEBUG IP: 123.231.231.123 User: fred A message at DEBUG level with 2 parameters 2010-09-06 22:38:15,301 d.e.f INFO IP: 123.231.231.123 User: fred A message at INFO level with 2 parameters Use of contextvars¶ Since Python 3.7, the contextvars module has provided context-local storage which works for both threading and asyncio processing needs. This type of storage may thus be generally preferable to thread-locals. The following example shows how, in a multi-threaded environment, logs can populated with contextual information such as, for example, request attributes handled by web applications. For the purposes of illustration, say that you have different web applications, each independent of the other but running in the same Python process and using a library common to them. How can each of these applications have their own log, where all logging messages from the library (and other request processing code) are directed to the appropriate application’s log file, while including in the log additional contextual information such as client IP, HTTP request method and client username? 
Let’s assume that the library can be simulated by the following code: # webapplib.py import logging import time logger = logging.getLogger(__name__) def useful(): # Just a representative event logged from the library logger.debug('Hello from webapplib!') # Just sleep for a bit so other threads get to run time.sleep(0.01) We can simulate the multiple web applications by means of two simple classes, Request and WebApp. These simulate how real threaded web applications work - each request is handled by a thread: # main.py import argparse from contextvars import ContextVar import logging import os from random import choice import threading import webapplib logger = logging.getLogger(__name__) root = logging.getLogger() root.setLevel(logging.DEBUG) class Request: """ A simple dummy request class which just holds dummy HTTP request method, client IP address and client username """ def __init__(self, method, ip, user): self.method = method self.ip = ip self.user = user # A dummy set of requests which will be used in the simulation - we'll just pick # from this list randomly. Note that all GET requests are from 192.168.2.XXX # addresses, whereas POST requests are from 192.16.3.XXX addresses. Three users # are represented in the sample requests. REQUESTS = [ Request('GET', '192.168.2.20', 'jim'), Request('POST', '192.168.3.20', 'fred'), Request('GET', '192.168.2.21', 'sheila'), Request('POST', '192.168.3.21', 'jim'), Request('GET', '192.168.2.22', 'fred'), Request('POST', '192.168.3.22', 'sheila'), ] # Note that the format string includes references to request context information # such as HTTP method, client IP and username formatter = logging.Formatter('%(threadName)-11s%(appName)s%(name)-9s%(user)-6s%(ip)s%(method)-4s%(message)s') # Create our context variables. These will be filled at the start of request # processing, and used in the logging that happens during that processing ctx_request = ContextVar('request') ctx_appname = ContextVar('appname') class InjectingFilter(logging.Filter): """ A filter which injects context-specific information into logs and ensures that only information for a specific webapp is included in its log """ def __init__(self, app): self.app = app def filter(self, record): request = ctx_request.get() record.method = request.method record.ip = request.ip record.user = request.user record.appName = appName = ctx_appname.get() return appName == self.app.name class WebApp: """ A dummy web application class which has its own handler and filter for a webapp-specific log. """ def __init__(self, name): self.name = name handler = logging.FileHandler(name + '.log', 'w') f = InjectingFilter(self) handler.setFormatter(formatter) handler.addFilter(f) root.addHandler(handler) self.num_requests = 0 def process_request(self, request): """ This is the dummy method for processing a request. It's called on a different thread for every request. We store the context information into the context vars before doing anything else. 
""" ctx_request.set(request) ctx_appname.set(self.name) self.num_requests += 1 logger.debug('Request processing started') webapplib.useful() logger.debug('Request processing finished') def main(): fn = os.path.splitext(os.path.basename(__file__))[0] adhf = argparse.ArgumentDefaultsHelpFormatter ap = argparse.ArgumentParser(formatter_class=adhf, prog=fn, description='Simulate a couple of web ' 'applications handling some ' 'requests, showing how request ' 'context can be used to ' 'populate logs') aa = ap.add_argument aa('--count', '-c', type=int, default=100, help='How many requests to simulate') options = ap.parse_args() # Create the dummy webapps and put them in a list which we can use to select # from randomly app1 = WebApp('app1') app2 = WebApp('app2') apps = [app1, app2] threads = [] # Add a common handler which will capture all events handler = logging.FileHandler('app.log', 'w') handler.setFormatter(formatter) root.addHandler(handler) # Generate calls to process requests for i in range(options.count): try: # Pick an app at random and a request for it to process app = choice(apps) request = choice(REQUESTS) # Process the request in its own thread t = threading.Thread(target=app.process_request, args=(request,)) threads.append(t) t.start() except KeyboardInterrupt: break # Wait for the threads to terminate for t in threads: t.join() for app in apps: print('%s processed %s requests' % (app.name, app.num_requests)) if __name__ == '__main__': main() If you run the above, you should find that roughly half the requests go into app1.log and the rest into app2.log, and the all the requests are logged to app.log. Each webapp-specific log will contain only log entries for only that webapp, and the request information will be displayed consistently in the log (i.e. the information in each dummy request will always appear together in a log line). This is illustrated by the following shell output: ~/logging-contextual-webapp$ python main.py app1 processed51 requests app2 processed49 requests ~/logging-contextual-webapp$ wc -l *.log 153 app1.log 147 app2.log 300 app.log 600 total ~/logging-contextual-webapp$ head -3 app1.log Thread-3(process_request) app1 __main__ jim192.168.3.21 POST Request processing started Thread-3(process_request) app1 webapplib jim192.168.3.21 POST Hello from webapplib! Thread-5(process_request) app1 __main__ jim192.168.3.21 POST Request processing started ~/logging-contextual-webapp$ head -3 app2.log Thread-1(process_request) app2 __main__ sheila192.168.2.21 GET Request processing started Thread-1(process_request) app2 webapplib sheila192.168.2.21 GET Hello from webapplib! Thread-2(process_request) app2 __main__ jim192.168.2.20 GET Request processing started ~/logging-contextual-webapp$ head app.log Thread-1(process_request) app2 __main__ sheila192.168.2.21 GET Request processing started Thread-1(process_request) app2 webapplib sheila192.168.2.21 GET Hello from webapplib! Thread-2(process_request) app2 __main__ jim192.168.2.20 GET Request processing started Thread-3(process_request) app1 __main__ jim192.168.3.21 POST Request processing started Thread-2(process_request) app2 webapplib jim192.168.2.20 GET Hello from webapplib! Thread-3(process_request) app1 webapplib jim192.168.3.21 POST Hello from webapplib! Thread-4(process_request) app2 __main__ fred192.168.2.22 GET Request processing started Thread-5(process_request) app1 __main__ jim192.168.3.21 POST Request processing started Thread-4(process_request) app2 webapplib fred192.168.2.22 GET Hello from webapplib! 
Thread-6(process_request) app1 __main__ jim192.168.3.21 POST Request processing started ~/logging-contextual-webapp$ grep app1 app1.log| wc -l 153 ~/logging-contextual-webapp$ grep app2 app2.log| wc -l 147 ~/logging-contextual-webapp$ grep app1 app.log| wc -l 153 ~/logging-contextual-webapp$ grep app2 app.log| wc -l 147 Imparting contextual information in handlers¶ Each Handler has its own chain of filters. If you want to add contextual information to a LogRecord without leaking it to other handlers, you can use a filter that returns a new LogRecord instead of modifying it in-place, as shown in the following script: import copy import logging def filter(record: logging.LogRecord): record = copy.copy(record) record.user = 'jim' return record if __name__ == '__main__': logger = logging.getLogger() logger.setLevel(logging.INFO) handler = logging.StreamHandler() formatter = logging.Formatter('%(message)s from %(user)-8s') handler.setFormatter(formatter) handler.addFilter(filter) logger.addHandler(handler) logger.info('A log message') Logging to a single file from multiple processes¶ Although logging is thread-safe, and logging to a single file from multiple threads in a single process is supported, logging to a single file from multiple processes is not supported, because there is no standard way to serialize access to a single file across multiple processes in Python. If you need to log to a single file from multiple processes, one way of doing this is to have all the processes log to a SocketHandler, and have a separate process which implements a socket server which reads from the socket and logs to file. (If you prefer, you can dedicate one thread in one of the existing processes to perform this function.) This section documents this approach in more detail and includes a working socket receiver which can be used as a starting point for you to adapt in your own applications. You could also write your own handler which uses the Lock class from the multiprocessing module to serialize access to the file from your processes. The existing FileHandler and subclasses do not make use of multiprocessing at present, though they may do so in the future. Note that at present, the multiprocessing module does not provide working lock functionality on all platforms (see https://bugs.python.org/issue3770). Alternatively, you can use a Queue and a QueueHandler to send all logging events to one of the processes in your multi-process application. The following example script demonstrates how you can do this; in the example a separate listener process listens for events sent by other processes and logs them according to its own logging configuration. Although the example only demonstrates one way of doing it (for example, you may want to use a listener thread rather than a separate listener process – the implementation would be analogous) it does allow for completely different logging configurations for the listener and the other processes in your application, and can be used as the basis for code meeting your own specific requirements: # You'll need these imports in your own code import logging import logging.handlers import multiprocessing # Next two import lines for this demo only from random import choice, random import time # # Because you'll want to define the logging configurations for listener and workers, the # listener and worker process functions take a configurer parameter which is a callable # for configuring logging for that process. 
These functions are also passed the queue, # which they use for communication. # # In practice, you can configure the listener however you want, but note that in this # simple example, the listener does not apply level or filter logic to received records. # In practice, you would probably want to do this logic in the worker processes, to avoid # sending events which would be filtered out between processes. # # The size of the rotated files is made small so you can see the results easily. def listener_configurer(): root = logging.getLogger() h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10) f = logging.Formatter('%(asctime)s%(processName)-10s%(name)s%(levelname)-8s%(message)s') h.setFormatter(f) root.addHandler(h) # This is the listener process top-level loop: wait for logging events # (LogRecords)on the queue and handle them, quit when you get a None for a # LogRecord. def listener_process(queue, configurer): configurer() while True: try: record = queue.get() if record is None: # We send this as a sentinel to tell the listener to quit. break logger = logging.getLogger(record.name) logger.handle(record) # No level or filter logic applied - just do it! except Exception: import sys, traceback print('Whoops! Problem:', file=sys.stderr) traceback.print_exc(file=sys.stderr) # Arrays used for random selections in this demo LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL] LOGGERS = ['a.b.c', 'd.e.f'] MESSAGES = [ 'Random message #1', 'Random message #2', 'Random message #3', ] # The worker configuration is done at the start of the worker process run. # Note that on Windows you can't rely on fork semantics, so each process # will run the logging configuration code when it starts. def worker_configurer(queue): h = logging.handlers.QueueHandler(queue) # Just the one handler needed root = logging.getLogger() root.addHandler(h) # send all messages, for demo; no other level or filter logic applied. root.setLevel(logging.DEBUG) # This is the worker process top-level loop, which just logs ten events with # random intervening delays before terminating. # The print messages are just so you know it's doing something! def worker_process(queue, configurer): configurer(queue) name = multiprocessing.current_process().name print('Worker started: %s' % name) for i in range(10): time.sleep(random()) logger = logging.getLogger(choice(LOGGERS)) level = choice(LEVELS) message = choice(MESSAGES) logger.log(level, message) print('Worker finished: %s' % name) # Here's where the demo gets orchestrated. Create the queue, create and start # the listener, create ten workers and start them, wait for them to finish, # then send a None to the queue to tell the listener to finish. 
def main(): queue = multiprocessing.Queue(-1) listener = multiprocessing.Process(target=listener_process, args=(queue, listener_configurer)) listener.start() workers = [] for i in range(10): worker = multiprocessing.Process(target=worker_process, args=(queue, worker_configurer)) workers.append(worker) worker.start() for w in workers: w.join() queue.put_nowait(None) listener.join() if __name__ == '__main__': main() A variant of the above script keeps the logging in the main process, in a separate thread: import logging import logging.config import logging.handlers from multiprocessing import Process, Queue import random import threading import time def logger_thread(q): while True: record = q.get() if record is None: break logger = logging.getLogger(record.name) logger.handle(record) def worker_process(q): qh = logging.handlers.QueueHandler(q) root = logging.getLogger() root.setLevel(logging.DEBUG) root.addHandler(qh) levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL] loggers = ['foo', 'foo.bar', 'foo.bar.baz', 'spam', 'spam.ham', 'spam.ham.eggs'] for i in range(100): lvl = random.choice(levels) logger = logging.getLogger(random.choice(loggers)) logger.log(lvl, 'Message no. %d', i) if __name__ == '__main__': q = Queue() d = { 'version': 1, 'formatters': { 'detailed': { 'class': 'logging.Formatter', 'format': '%(asctime)s%(name)-15s%(levelname)-8s%(processName)-10s%(message)s' } }, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'level': 'INFO', }, 'file': { 'class': 'logging.FileHandler', 'filename': 'mplog.log', 'mode': 'w', 'formatter': 'detailed', }, 'foofile': { 'class': 'logging.FileHandler', 'filename': 'mplog-foo.log', 'mode': 'w', 'formatter': 'detailed', }, 'errors': { 'class': 'logging.FileHandler', 'filename': 'mplog-errors.log', 'mode': 'w', 'level': 'ERROR', 'formatter': 'detailed', }, }, 'loggers': { 'foo': { 'handlers': ['foofile'] } }, 'root': { 'level': 'DEBUG', 'handlers': ['console', 'file', 'errors'] }, } workers = [] for i in range(5): wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,)) workers.append(wp) wp.start() logging.config.dictConfig(d) lp = threading.Thread(target=logger_thread, args=(q,)) lp.start() # At this point, the main process could do some useful work of its own # Once it's done that, it can wait for the workers to terminate... for wp in workers: wp.join() # And now tell the logging thread to finish up, too q.put(None) lp.join() This variant shows how you can e.g. apply configuration for particular loggers - e.g. the foo logger has a special handler which stores all events in the foo subsystem in a file mplog-foo.log. This will be used by the logging machinery in the main process (even though the logging events are generated in the worker processes) to direct the messages to the appropriate destinations. Using concurrent.futures.ProcessPoolExecutor¶ If you want to use concurrent.futures.ProcessPoolExecutor to start your worker processes, you need to create the queue slightly differently. 
Instead of queue = multiprocessing.Queue(-1) you should use queue = multiprocessing.Manager().Queue(-1) # also works with the examples above and you can then replace the worker creation from this: workers = [] for i in range(10): worker = multiprocessing.Process(target=worker_process, args=(queue, worker_configurer)) workers.append(worker) worker.start() for w in workers: w.join() to this (remembering to first import concurrent.futures): with concurrent.futures.ProcessPoolExecutor(max_workers=10) as executor: for i in range(10): executor.submit(worker_process, queue, worker_configurer) Deploying Web applications using Gunicorn and uWSGI¶ When deploying Web applications using Gunicorn or uWSGI (or similar), multiple worker processes are created to handle client requests. In such environments, avoid creating file-based handlers directly in your web application. Instead, use a SocketHandler to log from the web application to a listener in a separate process. This can be set up using a process management tool such as Supervisor - see Running a logging socket listener in production for more details. Using file rotation¶ Sometimes you want to let a log file grow to a certain size, then open a new file and log to that. You may want to keep a certain number of these files, and when that many files have been created, rotate the files so that the number of files and the size of the files both remain bounded. For this usage pattern, the logging package provides a RotatingFileHandler: import glob import logging import logging.handlers LOG_FILENAME = 'logging_rotatingfile_example.out' # Set up a specific logger with our desired output level my_logger = logging.getLogger('MyLogger') my_logger.setLevel(logging.DEBUG) # Add the log message handler to the logger handler = logging.handlers.RotatingFileHandler( LOG_FILENAME, maxBytes=20, backupCount=5) my_logger.addHandler(handler) # Log some messages for i in range(20): my_logger.debug('i = %d' % i) # See what files are created logfiles = glob.glob('%s*' % LOG_FILENAME) for filename in logfiles: print(filename) The result should be 6 separate files, each with part of the log history for the application: logging_rotatingfile_example.out logging_rotatingfile_example.out.1 logging_rotatingfile_example.out.2 logging_rotatingfile_example.out.3 logging_rotatingfile_example.out.4 logging_rotatingfile_example.out.5 The most current file is always logging_rotatingfile_example.out, and each time it reaches the size limit it is renamed with the suffix .1. Each of the existing backup files is renamed to increment the suffix (.1 becomes .2, etc.) and the .6 file is erased. Obviously this example sets the log length much too small as an extreme example. You would want to set maxBytes to an appropriate value. Use of alternative formatting styles¶ When logging was added to the Python standard library, the only way of formatting messages with variable content was to use the %-formatting method. Since then, Python has gained two new formatting approaches: string.Template (added in Python 2.4) and str.format() (added in Python 2.6). Logging (as of 3.2) provides improved support for these two additional formatting styles. The Formatter class been enhanced to take an additional, optional keyword parameter named style. This defaults to '%', but other possible values are '{' and '$', which correspond to the other two formatting styles. 
Backwards compatibility is maintained by default (as you would expect), but by explicitly specifying a style parameter, you get the ability to specify format strings which work with str.format() or string.Template. Here’s an example console session to show the possibilities: >>> import logging >>> root = logging.getLogger() >>> root.setLevel(logging.DEBUG) >>> handler = logging.StreamHandler() >>> bf = logging.Formatter('{asctime}{name}{levelname:8s}{message}', ... style='{') >>> handler.setFormatter(bf) >>> root.addHandler(handler) >>> logger = logging.getLogger('foo.bar') >>> logger.debug('This is a DEBUG message') 2010-10-28 15:11:55,341 foo.bar DEBUG This is a DEBUG message >>> logger.critical('This is a CRITICAL message') 2010-10-28 15:12:11,526 foo.bar CRITICAL This is a CRITICAL message >>> df = logging.Formatter('$asctime $name ${levelname} $message', ... style='$') >>> handler.setFormatter(df) >>> logger.debug('This is a DEBUG message') 2010-10-28 15:13:06,924 foo.bar DEBUG This is a DEBUG message >>> logger.critical('This is a CRITICAL message') 2010-10-28 15:13:11,494 foo.bar CRITICAL This is a CRITICAL message >>> Note that the formatting of logging messages for final output to logs is completely independent of how an individual logging message is constructed. That can still use %-formatting, as shown here: >>> logger.error('This is an%s%s%s', 'other,', 'ERROR,', 'message') 2010-10-28 15:19:29,833 foo.bar ERROR This is another, ERROR, message >>> Logging calls (logger.debug(), logger.info() etc.) only take positional parameters for the actual logging message itself, with keyword parameters used only for determining options for how to handle the actual logging call (e.g. the exc_info keyword parameter to indicate that traceback information should be logged, or the extra keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using str.format() or string.Template syntax, because internally the logging package uses %-formatting to merge the format string and the variable arguments. There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings. There is, however, a way that you can use {}- and $- formatting to construct your individual log messages. Recall that for a message you can use an arbitrary object as a message format string, and that the logging package will call str() on that object to get the actual format string. Consider the following two classes: class BraceMessage: def __init__(self, fmt, /, *args, **kwargs): self.fmt = fmt self.args = args self.kwargs = kwargs def __str__(self): return self.fmt.format(*self.args, **self.kwargs) class DollarMessage: def __init__(self, fmt, /, **kwargs): self.fmt = fmt self.kwargs = kwargs def __str__(self): from string import Template return Template(self.fmt).substitute(**self.kwargs) Either of these can be used in place of a format string, to allow {}- or $-formatting to be used to build the actual “message” part which appears in the formatted log output in place of “%(message)s” or “{message}” or “$message”. It’s a little unwieldy to use the class names whenever you want to log something, but it’s quite palatable if you use an alias such as __ (double underscore — not to be confused with _, the single underscore used as a synonym/alias for gettext.gettext() or its brethren). 
The above classes are not included in Python, though they’re easy enough to copy and paste into your own code. They can be used as follows (assuming that they’re declared in a module called wherever): >>> from wherever import BraceMessage as __ >>> print(__('Message with {0}{name}', 2, name='placeholders')) Message with 2 placeholders >>> class Point: pass ... >>> p = Point() >>> p.x = 0.5 >>> p.y = 0.5 >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})', ... point=p)) Message with coordinates: (0.50, 0.50) >>> from wherever import DollarMessage as __ >>> print(__('Message with $num $what', num=2, what='placeholders')) Message with 2 placeholders >>> While the above examples use print() to show how the formatting works, you would of course use logger.debug() or similar to actually log using this approach. One thing to note is that you pay no significant performance penalty with this approach: the actual formatting happens not when you make the logging call, but when (and if) the logged message is actually about to be output to a log by a handler. So the only slightly unusual thing which might trip you up is that the parentheses go around the format string and the arguments, not just the format string. That’s because the __ notation is just syntax sugar for a constructor call to one of the XXXMessage classes. If you prefer, you can use a LoggerAdapter to achieve a similar effect to the above, as in the following example: import logging class Message: def __init__(self, fmt, args): self.fmt = fmt self.args = args def __str__(self): return self.fmt.format(*self.args) class StyleAdapter(logging.LoggerAdapter): def log(self, level, msg, /, *args, stacklevel=1, **kwargs): if self.isEnabledFor(level): msg, kwargs = self.process(msg, kwargs) self.logger.log(level, Message(msg, args), **kwargs, stacklevel=stacklevel+1) logger = StyleAdapter(logging.getLogger(__name__)) def main(): logger.debug('Hello, {}', 'world!') if __name__ == '__main__': logging.basicConfig(level=logging.DEBUG) main() The above script should log the message Hello, world! when run with Python 3.8 or later. Customizing LogRecord¶ Every logging event is represented by a LogRecord instance. When an event is logged and not filtered out by a logger’s level, a LogRecord is created, populated with information about the event and then passed to the handlers for that logger (and its ancestors, up to and including the logger where further propagation up the hierarchy is disabled). Before Python 3.2, there were only two places where this creation was done: Logger.makeRecord(), which is called in the normal process of logging an event. This invoked LogRecord directly to create an instance. makeLogRecord(), which is called with a dictionary containing attributes to be added to the LogRecord. This is typically invoked when a suitable dictionary has been received over the network (e.g. in pickle form via a SocketHandler, or in JSON form via an HTTPHandler). This has usually meant that if you need to do anything special with a LogRecord, you’ve had to do one of the following. Create your own Logger subclass, which overrides Logger.makeRecord(), and set it using setLoggerClass() before any loggers that you care about are instantiated. Add a Filter to a logger or handler, which does the necessary special manipulation you need when its filter() method is called. The first approach would be a little unwieldy in the scenario where (say) several different libraries wanted to do different things. 
Each would attempt to set its own Logger subclass, and the one which did this last would win. The second approach works reasonably well for many cases, but does not allow you to e.g. use a specialized subclass of LogRecord. Library developers can set a suitable filter on their loggers, but they would have to remember to do this every time they introduced a new logger (which they would do simply by adding new packages or modules and doing logger = logging.getLogger(__name__) at module level). It’s probably one too many things to think about. Developers could also add the filter to a NullHandler attached to their top-level logger, but this would not be invoked if an application developer attached a handler to a lower-level library logger — so output from that handler would not reflect the intentions of the library developer. In Python 3.2 and later, LogRecord creation is done through a factory, which you can specify. The factory is just a callable you can set with setLogRecordFactory(), and interrogate with getLogRecordFactory(). The factory is invoked with the same signature as the LogRecord constructor, as LogRecord is the default setting for the factory. This approach allows a custom factory to control all aspects of LogRecord creation. For example, you could return a subclass, or just add some additional attributes to the record once created, using a pattern similar to this: old_factory = logging.getLogRecordFactory() def record_factory(*args, **kwargs): record = old_factory(*args, **kwargs) record.custom_attribute = 0xdecafbad return record logging.setLogRecordFactory(record_factory) This pattern allows different libraries to chain factories together, and as long as they don’t overwrite each other’s attributes or unintentionally overwrite the attributes provided as standard, there should be no surprises. However, it should be borne in mind that each link in the chain adds run-time overhead to all logging operations, and the technique should only be used when the use of a Filter does not provide the desired result. Subclassing QueueHandler and QueueListener- a ZeroMQ example¶ Subclass QueueHandler¶ You can use a QueueHandler subclass to send messages to other kinds of queues, for example a ZeroMQ ‘publish’ socket. In the example below,the socket is created separately and passed to the handler (as its ‘queue’): import zmq # using pyzmq, the Python binding for ZeroMQ import json # for serializing records portably ctx = zmq.Context() sock = zmq.Socket(ctx, zmq.PUB) # or zmq.PUSH, or other suitable value sock.bind('tcp://*:5556') # or wherever class ZeroMQSocketHandler(QueueHandler): def enqueue(self, record): self.queue.send_json(record.__dict__) handler = ZeroMQSocketHandler(sock) Of course there are other ways of organizing this, for example passing in the data needed by the handler to create the socket: class ZeroMQSocketHandler(QueueHandler): def __init__(self, uri, socktype=zmq.PUB, ctx=None): self.ctx = ctx or zmq.Context() socket = zmq.Socket(self.ctx, socktype) socket.bind(uri) super().__init__(socket) def enqueue(self, record): self.queue.send_json(record.__dict__) def close(self): self.queue.close() Subclass QueueListener¶ You can also subclass QueueListener to get messages from other kinds of queues, for example a ZeroMQ ‘subscribe’ socket. 
Here’s an example: class ZeroMQSocketListener(QueueListener): def __init__(self, uri, /, *handlers, **kwargs): self.ctx = kwargs.get('ctx') or zmq.Context() socket = zmq.Socket(self.ctx, zmq.SUB) socket.setsockopt_string(zmq.SUBSCRIBE, '') # subscribe to everything socket.connect(uri) super().__init__(socket, *handlers, **kwargs) def dequeue(self): msg = self.queue.recv_json() return logging.makeLogRecord(msg) Subclassing QueueHandler and QueueListener- a pynng example¶ In a similar way to the above section, we can implement a listener and handler using pynng, which is a Python binding to NNG, billed as a spiritual successor to ZeroMQ. The following snippets illustrate – you can test them in an environment which has pynng installed. Just for variety, we present the listener first. Subclass QueueListener¶ # listener.py import json import logging import logging.handlers import pynng DEFAULT_ADDR = "tcp://localhost:13232" interrupted = False class NNGSocketListener(logging.handlers.QueueListener): def __init__(self, uri, /, *handlers, **kwargs): # Have a timeout for interruptability, and open a # subscriber socket socket = pynng.Sub0(listen=uri, recv_timeout=500) # The b'' subscription matches all topics topics = kwargs.pop('topics', None) or b'' socket.subscribe(topics) # We treat the socket as a queue super().__init__(socket, *handlers, **kwargs) def dequeue(self, block): data = None # Keep looping while not interrupted and no data received over the # socket while not interrupted: try: data = self.queue.recv(block=block) break except pynng.Timeout: pass except pynng.Closed: # sometimes happens when you hit Ctrl-C break if data is None: return None # Get the logging event sent from a publisher event = json.loads(data.decode('utf-8')) return logging.makeLogRecord(event) def enqueue_sentinel(self): # Not used in this implementation, as the socket isn't really a # queue pass logging.getLogger('pynng').propagate = False listener = NNGSocketListener(DEFAULT_ADDR, logging.StreamHandler(), topics=b'') listener.start() print('Press Ctrl-C to stop.') try: while True: pass except KeyboardInterrupt: interrupted = True finally: listener.stop() Subclass QueueHandler¶ # sender.py import json import logging import logging.handlers import time import random import pynng DEFAULT_ADDR = "tcp://localhost:13232" class NNGSocketHandler(logging.handlers.QueueHandler): def __init__(self, uri): socket = pynng.Pub0(dial=uri, send_timeout=500) super().__init__(socket) def enqueue(self, record): # Send the record as UTF-8 encoded JSON d = dict(record.__dict__) data = json.dumps(d) self.queue.send(data.encode('utf-8')) def close(self): self.queue.close() logging.getLogger('pynng').propagate = False handler = NNGSocketHandler(DEFAULT_ADDR) # Make sure the process ID is in the output logging.basicConfig(level=logging.DEBUG, handlers=[logging.StreamHandler(), handler], format='%(levelname)-8s%(name)10s%(process)6s%(message)s') levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL) logger_names = ('myapp', 'myapp.lib1', 'myapp.lib2') msgno = 1 while True: # Just randomly select some loggers and levels and log away level = random.choice(levels) logger = logging.getLogger(random.choice(logger_names)) logger.log(level, 'Message no. %5d' % msgno) msgno += 1 delay = random.random() * 2 + 0.5 time.sleep(delay) You can run the above two snippets in separate command shells. If we run the listener in one shell and run the sender in two separate shells, we should see something like the following. 
In the first sender shell: $ python sender.py DEBUG myapp 613 Message no. 1 WARNING myapp.lib2 613 Message no. 2 CRITICAL myapp.lib2 613 Message no. 3 WARNING myapp.lib2 613 Message no. 4 CRITICAL myapp.lib1 613 Message no. 5 DEBUG myapp 613 Message no. 6 CRITICAL myapp.lib1 613 Message no. 7 INFO myapp.lib1 613 Message no. 8 (and so on) In the second sender shell: $ python sender.py INFO myapp.lib2 657 Message no. 1 CRITICAL myapp.lib2 657 Message no. 2 CRITICAL myapp 657 Message no. 3 CRITICAL myapp.lib1 657 Message no. 4 INFO myapp.lib1 657 Message no. 5 WARNING myapp.lib2 657 Message no. 6 CRITICAL myapp 657 Message no. 7 DEBUG myapp.lib1 657 Message no. 8 (and so on) In the listener shell: $ python listener.py Press Ctrl-C to stop. DEBUG myapp 613 Message no. 1 WARNING myapp.lib2 613 Message no. 2 INFO myapp.lib2 657 Message no. 1 CRITICAL myapp.lib2 613 Message no. 3 CRITICAL myapp.lib2 657 Message no. 2 CRITICAL myapp 657 Message no. 3 WARNING myapp.lib2 613 Message no. 4 CRITICAL myapp.lib1 613 Message no. 5 CRITICAL myapp.lib1 657 Message no. 4 INFO myapp.lib1 657 Message no. 5 DEBUG myapp 613 Message no. 6 WARNING myapp.lib2 657 Message no. 6 CRITICAL myapp 657 Message no. 7 CRITICAL myapp.lib1 613 Message no. 7 INFO myapp.lib1 613 Message no. 8 DEBUG myapp.lib1 657 Message no. 8 (and so on) As you can see, the logging from the two sender processes is interleaved in the listener’s output. An example dictionary-based configuration¶ Below is an example of a logging configuration dictionary - it’s taken from the documentation on the Django project. This dictionary is passed to dictConfig() to put the configuration into effect: LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'verbose': { 'format': '{levelname}{asctime}{module}{process:d}{thread:d}{message}', 'style': '{', }, 'simple': { 'format': '{levelname}{message}', 'style': '{', }, }, 'filters': { 'special': { '()': 'project.logging.SpecialFilter', 'foo': 'bar', }, }, 'handlers': { 'console': { 'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 'simple', }, 'mail_admins': { 'level': 'ERROR', 'class': 'django.utils.log.AdminEmailHandler', 'filters': ['special'] } }, 'loggers': { 'django': { 'handlers': ['console'], 'propagate': True, }, 'django.request': { 'handlers': ['mail_admins'], 'level': 'ERROR', 'propagate': False, }, 'myproject.custom': { 'handlers': ['console', 'mail_admins'], 'level': 'INFO', 'filters': ['special'] } } } For more information about this configuration, you can see the relevant section of the Django documentation. Using a rotator and namer to customize log rotation processing¶ An example of how you can define a namer and rotator is given in the following runnable script, which shows gzip compression of the log file: import gzip import logging import logging.handlers import os import shutil def namer(name): return name + ".gz" def rotator(source, dest): with open(source, 'rb') as f_in: with gzip.open(dest, 'wb') as f_out: shutil.copyfileobj(f_in, f_out) os.remove(source) rh = logging.handlers.RotatingFileHandler('rotated.log', maxBytes=128, backupCount=5) rh.rotator = rotator rh.namer = namer root = logging.getLogger() root.setLevel(logging.INFO) root.addHandler(rh) f = logging.Formatter('%(asctime)s%(message)s') rh.setFormatter(f) for i in range(1000): root.info(f'Message no. 
{i+1}') After running this, you will see six new files, five of which are compressed: $ ls rotated.log* rotated.log rotated.log.2.gz rotated.log.4.gz rotated.log.1.gz rotated.log.3.gz rotated.log.5.gz $ zcat rotated.log.1.gz 2023-01-20 02:28:17,767 Message no. 996 2023-01-20 02:28:17,767 Message no. 997 2023-01-20 02:28:17,767 Message no. 998 A more elaborate multiprocessing example¶ The following working example shows how logging can be used with multiprocessing using configuration files. The configurations are fairly simple, but serve to illustrate how more complex ones could be implemented in a real multiprocessing scenario. In the example, the main process spawns a listener process and some worker processes. Each of the main process, the listener and the workers have three separate configurations (the workers all share the same configuration). We can see logging in the main process, how the workers log to a QueueHandler and how the listener implements a QueueListener and a more complex logging configuration, and arranges to dispatch events received via the queue to the handlers specified in the configuration. Note that these configurations are purely illustrative, but you should be able to adapt this example to your own scenario. Here’s the script - the docstrings and the comments hopefully explain how it works: import logging import logging.config import logging.handlers from multiprocessing import Process, Queue, Event, current_process import os import random import time class MyHandler: """ A simple handler for logging events. It runs in the listener process and dispatches events to loggers based on the name in the received record, which then get dispatched, by the logging system, to the handlers configured for those loggers. """ def handle(self, record): if record.name == "root": logger = logging.getLogger() else: logger = logging.getLogger(record.name) if logger.isEnabledFor(record.levelno): # The process name is transformed just to show that it's the listener # doing the logging to files and console record.processName = '%s (for %s)' % (current_process().name, record.processName) logger.handle(record) def listener_process(q, stop_event, config): """ This could be done in the main process, but is just done in a separate process for illustrative purposes. This initialises logging according to the specified configuration, starts the listener and waits for the main process to signal completion via the event. The listener is then stopped, and the process exits. """ logging.config.dictConfig(config) listener = logging.handlers.QueueListener(q, MyHandler()) listener.start() if os.name == 'posix': # On POSIX, the setup logger will have been configured in the # parent process, but should have been disabled following the # dictConfig call. # On Windows, since fork isn't used, the setup logger won't # exist in the child, so it would be created and the message # would appear - hence the "if posix" clause. logger = logging.getLogger('setup') logger.critical('Should not appear, because of disabled logger ...') stop_event.wait() listener.stop() def worker_process(config): """ A number of these are spawned for the purpose of illustration. In practice, they could be a heterogeneous bunch of processes rather than ones which are identical to each other. This initialises logging according to the specified configuration, and logs a hundred messages with random levels to randomly selected loggers. A small sleep is added to allow other processes a chance to run. 
This is not strictly needed, but it mixes the output from the different processes a bit more than if it's left out. """ logging.config.dictConfig(config) levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL] loggers = ['foo', 'foo.bar', 'foo.bar.baz', 'spam', 'spam.ham', 'spam.ham.eggs'] if os.name == 'posix': # On POSIX, the setup logger will have been configured in the # parent process, but should have been disabled following the # dictConfig call. # On Windows, since fork isn't used, the setup logger won't # exist in the child, so it would be created and the message # would appear - hence the "if posix" clause. logger = logging.getLogger('setup') logger.critical('Should not appear, because of disabled logger ...') for i in range(100): lvl = random.choice(levels) logger = logging.getLogger(random.choice(loggers)) logger.log(lvl, 'Message no. %d', i) time.sleep(0.01) def main(): q = Queue() # The main process gets a simple configuration which prints to the console. config_initial = { 'version': 1, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'level': 'INFO' } }, 'root': { 'handlers': ['console'], 'level': 'DEBUG' } } # The worker process configuration is just a QueueHandler attached to the # root logger, which allows all messages to be sent to the queue. # We disable existing loggers to disable the "setup" logger used in the # parent process. This is needed on POSIX because the logger will # be there in the child following a fork(). config_worker = { 'version': 1, 'disable_existing_loggers': True, 'handlers': { 'queue': { 'class': 'logging.handlers.QueueHandler', 'queue': q } }, 'root': { 'handlers': ['queue'], 'level': 'DEBUG' } } # The listener process configuration shows that the full flexibility of # logging configuration is available to dispatch events to handlers however # you want. # We disable existing loggers to disable the "setup" logger used in the # parent process. This is needed on POSIX because the logger will # be there in the child following a fork(). config_listener = { 'version': 1, 'disable_existing_loggers': True, 'formatters': { 'detailed': { 'class': 'logging.Formatter', 'format': '%(asctime)s%(name)-15s%(levelname)-8s%(processName)-10s%(message)s' }, 'simple': { 'class': 'logging.Formatter', 'format': '%(name)-15s%(levelname)-8s%(processName)-10s%(message)s' } }, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'formatter': 'simple', 'level': 'INFO' }, 'file': { 'class': 'logging.FileHandler', 'filename': 'mplog.log', 'mode': 'w', 'formatter': 'detailed' }, 'foofile': { 'class': 'logging.FileHandler', 'filename': 'mplog-foo.log', 'mode': 'w', 'formatter': 'detailed' }, 'errors': { 'class': 'logging.FileHandler', 'filename': 'mplog-errors.log', 'mode': 'w', 'formatter': 'detailed', 'level': 'ERROR' } }, 'loggers': { 'foo': { 'handlers': ['foofile'] } }, 'root': { 'handlers': ['console', 'file', 'errors'], 'level': 'DEBUG' } } # Log some initial events, just to show that logging in the parent works # normally. 
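# Note that config_initial only attaches a console handler to the root logger, so the
# 'setup' logger used below has no handlers of its own and simply propagates its
# records up to the root logger's console handler.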
logging.config.dictConfig(config_initial) logger = logging.getLogger('setup') logger.info('About to create workers ...') workers = [] for i in range(5): wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(config_worker,)) workers.append(wp) wp.start() logger.info('Started worker: %s', wp.name) logger.info('About to create listener ...') stop_event = Event() lp = Process(target=listener_process, name='listener', args=(q, stop_event, config_listener)) lp.start() logger.info('Started listener') # We now hang around for the workers to finish their work. for wp in workers: wp.join() # Workers all done, listening can now stop. # Logging in the parent still works normally. logger.info('Telling listener to stop ...') stop_event.set() lp.join() logger.info('All done.') if __name__ == '__main__': main() Inserting a BOM into messages sent to a SysLogHandler¶ RFC 5424 requires that a Unicode message be sent to a syslog daemon as a set of bytes which have the following structure: an optional pure-ASCII component, followed by a UTF-8 Byte Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the relevant section of the specification.) In Python 3.1, code was added to SysLogHandler to insert a BOM into the message, but unfortunately, it was implemented incorrectly, with the BOM appearing at the beginning of the message and hence not allowing any pure-ASCII component to appear before it. As this behaviour is broken, the incorrect BOM insertion code is being removed from Python 3.2.4 and later. However, it is not being replaced, and if you want to produce RFC 5424-compliant messages which include a BOM, an optional pure-ASCII sequence before it and arbitrary Unicode after it, encoded using UTF-8, then you need to do the following: Attach a Formatter instance to your SysLogHandler instance, with a format string such as: 'ASCII section\ufeffUnicode section' The Unicode code point U+FEFF, when encoded using UTF-8, will be encoded as a UTF-8 BOM – the byte-string b'\xef\xbb\xbf'. Replace the ASCII section with whatever placeholders you like, but make sure that the data that appears in there after substitution is always ASCII (that way, it will remain unchanged after UTF-8 encoding). Replace the Unicode section with whatever placeholders you like; if the data which appears there after substitution contains characters outside the ASCII range, that’s fine – it will be encoded using UTF-8. The formatted message will be encoded using UTF-8 encoding by SysLogHandler. If you follow the above rules, you should be able to produce RFC 5424-compliant messages. If you don’t, logging may not complain, but your messages will not be RFC 5424-compliant, and your syslog daemon may complain. Implementing structured logging¶ Although most logging messages are intended for reading by humans, and thus not readily machine-parseable, there might be circumstances where you want to output messages in a structured format which is capable of being parsed by a program (without needing complex regular expressions to parse the log message). This is straightforward to achieve using the logging package. 
There are a number of ways in which this could be achieved, but the following is a simple approach which uses JSON to serialise the event in a machine-parseable manner: import json import logging class StructuredMessage: def __init__(self, message, /, **kwargs): self.message = message self.kwargs = kwargs def __str__(self): return '%s >>> %s' % (self.message, json.dumps(self.kwargs)) _ = StructuredMessage # optional, to improve readability logging.basicConfig(level=logging.INFO, format='%(message)s') logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456)) If the above script is run, it prints: message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"} Note that the order of items might be different according to the version of Python used. If you need more specialised processing, you can use a custom JSON encoder, as in the following complete example: import json import logging class Encoder(json.JSONEncoder): def default(self, o): if isinstance(o, set): return tuple(o) elif isinstance(o, str): return o.encode('unicode_escape').decode('ascii') return super().default(o) class StructuredMessage: def __init__(self, message, /, **kwargs): self.message = message self.kwargs = kwargs def __str__(self): s = Encoder().encode(self.kwargs) return '%s >>> %s' % (self.message, s) _ = StructuredMessage # optional, to improve readability def main(): logging.basicConfig(level=logging.INFO, format='%(message)s') logging.info(_('message 1', set_value={1, 2, 3}, snowman='\u2603')) if __name__ == '__main__': main() When the above script is run, it prints: message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]} Note that the order of items might be different according to the version of Python used. Customizing handlers with dictConfig()¶ There are times when you want to customize logging handlers in particular ways, and if you use dictConfig() you may be able to do this without subclassing. As an example, consider that you may want to set the ownership of a log file. On POSIX, this is easily done using shutil.chown(), but the file handlers in the stdlib don’t offer built-in support. You can customize handler creation using a plain function such as: def owned_file_handler(filename, mode='a', encoding=None, owner=None): if owner: if not os.path.exists(filename): open(filename, 'a').close() shutil.chown(filename, *owner) return logging.FileHandler(filename, mode, encoding) You can then specify, in a logging configuration passed to dictConfig(), that a logging handler be created by calling this function: LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'default': { 'format': '%(asctime)s%(levelname)s%(name)s%(message)s' }, }, 'handlers': { 'file':{ # The values below are popped from this dictionary and # used to create the handler, set the handler's level and # its formatter. '()': owned_file_handler, 'level':'DEBUG', 'formatter': 'default', # The values below are passed to the handler creator callable # as keyword arguments. 'owner': ['pulse', 'pulse'], 'filename': 'chowntest.log', 'mode': 'w', 'encoding': 'utf-8', }, }, 'root': { 'handlers': ['file'], 'level': 'DEBUG', }, } In this example I am setting the ownership using the pulse user and group, just for the purposes of illustration. 
Putting it together into a working script, chowntest.py: import logging, logging.config, os, shutil def owned_file_handler(filename, mode='a', encoding=None, owner=None): if owner: if not os.path.exists(filename): open(filename, 'a').close() shutil.chown(filename, *owner) return logging.FileHandler(filename, mode, encoding) LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'default': { 'format': '%(asctime)s%(levelname)s%(name)s%(message)s' }, }, 'handlers': { 'file':{ # The values below are popped from this dictionary and # used to create the handler, set the handler's level and # its formatter. '()': owned_file_handler, 'level':'DEBUG', 'formatter': 'default', # The values below are passed to the handler creator callable # as keyword arguments. 'owner': ['pulse', 'pulse'], 'filename': 'chowntest.log', 'mode': 'w', 'encoding': 'utf-8', }, }, 'root': { 'handlers': ['file'], 'level': 'DEBUG', }, } logging.config.dictConfig(LOGGING) logger = logging.getLogger('mylogger') logger.debug('A debug message') To run this, you will probably need to run as root: $ sudo python3.3 chowntest.py $ cat chowntest.log 2013-11-05 09:34:51,128 DEBUG mylogger A debug message $ ls -l chowntest.log -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log Note that this example uses Python 3.3 because that’s where shutil.chown() makes an appearance. This approach should work with any Python version that supports dictConfig() - namely, Python 2.7, 3.2 or later. With pre-3.3 versions, you would need to implement the actual ownership change using e.g. os.chown(). In practice, the handler-creating function may be in a utility module somewhere in your project. Instead of the line in the configuration: '()': owned_file_handler, you could use e.g.: '()': 'ext://project.util.owned_file_handler', where project.util can be replaced with the actual name of the package where the function resides. In the above working script, using 'ext://__main__.owned_file_handler' should work. Here, the actual callable is resolved by dictConfig() from the ext:// specification. This example hopefully also points the way to how you could implement other types of file change - e.g. setting specific POSIX permission bits - in the same way, using os.chmod(). Of course, the approach could also be extended to types of handler other than a FileHandler - for example, one of the rotating file handlers, or a different type of handler altogether. Using particular formatting styles throughout your application¶ In Python 3.2, the Formatter gained a style keyword parameter which, while defaulting to % for backward compatibility, allowed the specification of { or $ to support the formatting approaches supported by str.format() and string.Template. Note that this governs the formatting of logging messages for final output to logs, and is completely orthogonal to how an individual logging message is constructed. Logging calls (debug(), info() etc.) only take positional parameters for the actual logging message itself, with keyword parameters used only for determining options for how to handle the logging call (e.g. the exc_info keyword parameter to indicate that traceback information should be logged, or the extra keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using str.format() or string.Template syntax, because internally the logging package uses %-formatting to merge the format string and the variable arguments. 
There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings. There have been suggestions to associate format styles with specific loggers, but that approach also runs into backward compatibility problems because any existing code could be using a given logger name and using %-formatting. For logging to work interoperably between any third-party libraries and your code, decisions about formatting need to be made at the level of the individual logging call. This opens up a couple of ways in which alternative formatting styles can be accommodated. Using LogRecord factories¶ In Python 3.2, along with the Formatter changes mentioned above, the logging package gained the ability to allow users to set their own LogRecord subclasses, using the setLogRecordFactory() function. You can use this to set your own subclass of LogRecord, which does the Right Thing by overriding the getMessage() method. The base class implementation of this method is where the msg % args formatting happens, and where you can substitute your alternate formatting; however, you should be careful to support all formatting styles and allow %-formatting as the default, to ensure interoperability with other code. Care should also be taken to call str(self.msg), just as the base implementation does. Refer to the reference documentation on setLogRecordFactory() and LogRecord for more information. Using custom message objects¶ There is another, perhaps simpler way that you can use {}- and $- formatting to construct your individual log messages. You may recall (from Using arbitrary objects as messages) that when logging you can use an arbitrary object as a message format string, and that the logging package will call str() on that object to get the actual format string. Consider the following two classes: class BraceMessage: def __init__(self, fmt, /, *args, **kwargs): self.fmt = fmt self.args = args self.kwargs = kwargs def __str__(self): return self.fmt.format(*self.args, **self.kwargs) class DollarMessage: def __init__(self, fmt, /, **kwargs): self.fmt = fmt self.kwargs = kwargs def __str__(self): from string import Template return Template(self.fmt).substitute(**self.kwargs) Either of these can be used in place of a format string, to allow {}- or $-formatting to be used to build the actual “message” part which appears in the formatted log output in place of “%(message)s” or “{message}” or “$message”. If you find it a little unwieldy to use the class names whenever you want to log something, you can make it more palatable if you use an alias such as M or _ for the message (or perhaps __, if you are using _ for localization). Examples of this approach are given below. Firstly, formatting with str.format(): >>> __ = BraceMessage >>> print(__('Message with {0}{1}', 2, 'placeholders')) Message with 2 placeholders >>> class Point: pass ... >>> p = Point() >>> p.x = 0.5 >>> p.y = 0.5 >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})', point=p)) Message with coordinates: (0.50, 0.50) Secondly, formatting with string.Template: >>> __ = DollarMessage >>> print(__('Message with $num $what', num=2, what='placeholders')) Message with 2 placeholders >>> One thing to note is that you pay no significant performance penalty with this approach: the actual formatting happens not when you make the logging call, but when (and if) the logged message is actually about to be output to a log by a handler. 
So the only slightly unusual thing which might trip you up is that the parentheses go around the format string and the arguments, not just the format string. That’s because the __ notation is just syntax sugar for a constructor call to one of the XXXMessage classes shown above. Configuring filters with dictConfig()¶ You can configure filters using dictConfig(), though it might not be obvious at first glance how to do it (hence this recipe). Since Filter is the only filter class included in the standard library, and it is unlikely to cater to many requirements (it’s only there as a base class), you will typically need to define your own Filter subclass with an overridden filter() method. To do this, specify the () key in the configuration dictionary for the filter, specifying a callable which will be used to create the filter (a class is the most obvious, but you can provide any callable which returns a Filter instance). Here is a complete example: import logging import logging.config import sys class MyFilter(logging.Filter): def __init__(self, param=None): self.param = param def filter(self, record): if self.param is None: allow = True else: allow = self.param not in record.msg if allow: record.msg = 'changed: ' + record.msg return allow LOGGING = { 'version': 1, 'filters': { 'myfilter': { '()': MyFilter, 'param': 'noshow', } }, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'filters': ['myfilter'] } }, 'root': { 'level': 'DEBUG', 'handlers': ['console'] }, } if __name__ == '__main__': logging.config.dictConfig(LOGGING) logging.debug('hello') logging.debug('hello - noshow') This example shows how you can pass configuration data to the callable which constructs the instance, in the form of keyword parameters. When run, the above script will print: changed: hello which shows that the filter is working as configured. A couple of extra points to note: If you can’t refer to the callable directly in the configuration (e.g. if it lives in a different module, and you can’t import it directly where the configuration dictionary is), you can use the form ext://... as described in Access to external objects. For example, you could have used the text 'ext://__main__.MyFilter' instead of MyFilter in the above example. As well as for filters, this technique can also be used to configure custom handlers and formatters. See User-defined objects for more information on how logging supports using user-defined objects in its configuration, and see the other cookbook recipe Customizing handlers with dictConfig() above. Customized exception formatting¶ There might be times when you want to do customized exception formatting - for argument’s sake, let’s say you want exactly one line per logged event, even when exception information is present. You can do this with a custom formatter class, as shown in the following example: import logging class OneLineExceptionFormatter(logging.Formatter): def formatException(self, exc_info): """ Format an exception so that it prints on a single line. 
""" result = super().formatException(exc_info) return repr(result) # or format into one line however you want to def format(self, record): s = super().format(record) if record.exc_text: s = s.replace('\n', '') + '|' return s def configure_logging(): fh = logging.FileHandler('output.txt', 'w') f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|', '%d/%m/%Y %H:%M:%S') fh.setFormatter(f) root = logging.getLogger() root.setLevel(logging.DEBUG) root.addHandler(fh) def main(): configure_logging() logging.info('Sample message') try: x = 1 / 0 except ZeroDivisionError as e: logging.exception('ZeroDivisionError: %s', e) if __name__ == '__main__': main() When run, this produces a file with exactly two lines: 28/01/2015 07:21:23|INFO|Sample message| 28/01/2015 07:21:23|ERROR|ZeroDivisionError: integer division or modulo by zero|'Traceback (most recent call last):\n File "logtest7.py", line 30, in main\n x = 1 / 0\nZeroDivisionError: integer division or modulo by zero'| While the above treatment is simplistic, it points the way to how exception information can be formatted to your liking. The traceback module may be helpful for more specialized needs. Speaking logging messages¶ There might be situations when it is desirable to have logging messages rendered in an audible rather than a visible format. This is easy to do if you have text-to-speech (TTS) functionality available in your system, even if it doesn’t have a Python binding. Most TTS systems have a command line program you can run, and this can be invoked from a handler using subprocess. It’s assumed here that TTS command line programs won’t expect to interact with users or take a long time to complete, and that the frequency of logged messages will be not so high as to swamp the user with messages, and that it’s acceptable to have the messages spoken one at a time rather than concurrently, The example implementation below waits for one message to be spoken before the next is processed, and this might cause other handlers to be kept waiting. Here is a short example showing the approach, which assumes that the espeak TTS package is available: import logging import subprocess import sys class TTSHandler(logging.Handler): def emit(self, record): msg = self.format(record) # Speak slowly in a female English voice cmd = ['espeak', '-s150', '-ven+f3', msg] p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) # wait for the program to finish p.communicate() def configure_logging(): h = TTSHandler() root = logging.getLogger() root.addHandler(h) # the default formatter just returns the message root.setLevel(logging.DEBUG) def main(): logging.info('Hello') logging.debug('Goodbye') if __name__ == '__main__': configure_logging() sys.exit(main()) When run, this script should say “Hello” and then “Goodbye” in a female voice. The above approach can, of course, be adapted to other TTS systems and even other systems altogether which can process messages via external programs run from a command line. Buffering logging messages and outputting them conditionally¶ There might be situations where you want to log messages in a temporary area and only output them if a certain condition occurs. For example, you may want to start logging debug events in a function, and if the function completes without errors, you don’t want to clutter the log with the collected debug information, but if there is an error, you want all the debug information to be output as well as the error. 
Here is an example which shows how you could do this using a decorator for your functions where you want logging to behave this way. It makes use of the logging.handlers.MemoryHandler, which allows buffering of logged events until some condition occurs, at which point the buffered events are flushed - passed to another handler (the target handler) for processing. By default, the MemoryHandler flushed when its buffer gets filled up or an event whose level is greater than or equal to a specified threshold is seen. You can use this recipe with a more specialised subclass of MemoryHandler if you want custom flushing behavior. The example script has a simple function, foo, which just cycles through all the logging levels, writing to sys.stderr to say what level it’s about to log at, and then actually logging a message at that level. You can pass a parameter to foo which, if true, will log at ERROR and CRITICAL levels - otherwise, it only logs at DEBUG, INFO and WARNING levels. The script just arranges to decorate foo with a decorator which will do the conditional logging that’s required. The decorator takes a logger as a parameter and attaches a memory handler for the duration of the call to the decorated function. The decorator can be additionally parameterised using a target handler, a level at which flushing should occur, and a capacity for the buffer (number of records buffered). These default to a StreamHandler which writes to sys.stderr, logging.ERROR and 100 respectively. Here’s the script: import logging from logging.handlers import MemoryHandler import sys logger = logging.getLogger(__name__) logger.addHandler(logging.NullHandler()) def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None): if target_handler is None: target_handler = logging.StreamHandler() if flush_level is None: flush_level = logging.ERROR if capacity is None: capacity = 100 handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler) def decorator(fn): def wrapper(*args, **kwargs): logger.addHandler(handler) try: return fn(*args, **kwargs) except Exception: logger.exception('call failed') raise finally: super(MemoryHandler, handler).flush() logger.removeHandler(handler) return wrapper return decorator def write_line(s): sys.stderr.write('%s\n' % s) def foo(fail=False): write_line('about to log at DEBUG ...') logger.debug('Actually logged at DEBUG') write_line('about to log at INFO ...') logger.info('Actually logged at INFO') write_line('about to log at WARNING ...') logger.warning('Actually logged at WARNING') if fail: write_line('about to log at ERROR ...') logger.error('Actually logged at ERROR') write_line('about to log at CRITICAL ...') logger.critical('Actually logged at CRITICAL') return fail decorated_foo = log_if_errors(logger)(foo) if __name__ == '__main__': logger.setLevel(logging.DEBUG) write_line('Calling undecorated foo with False') assert not foo(False) write_line('Calling undecorated foo with True') assert foo(True) write_line('Calling decorated foo with False') assert not decorated_foo(False) write_line('Calling decorated foo with True') assert decorated_foo(True) When this script is run, the following output should be observed: Calling undecorated foo with False about to log at DEBUG ... about to log at INFO ... about to log at WARNING ... Calling undecorated foo with True about to log at DEBUG ... about to log at INFO ... about to log at WARNING ... about to log at ERROR ... about to log at CRITICAL ... 
Calling decorated foo with False about to log at DEBUG ... about to log at INFO ... about to log at WARNING ... Calling decorated foo with True about to log at DEBUG ... about to log at INFO ... about to log at WARNING ... about to log at ERROR ... Actually logged at DEBUG Actually logged at INFO Actually logged at WARNING Actually logged at ERROR about to log at CRITICAL ... Actually logged at CRITICAL As you can see, actual output from the decorated function only appears once an event of severity ERROR or greater is logged, but in that case the previously buffered events at lower severities are output as well.
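The decorator can be pointed at any target handler, not just the default StreamHandler. Here is a minimal usage sketch, not part of the original recipe: it assumes log_if_errors, logger and foo are defined as in the script above (with the logger level set to DEBUG), and the file name and capacity are purely illustrative.

import logging

# Illustrative target: buffered records are written to app-errors.log only
# when an ERROR (or worse) is logged inside the decorated function.
file_target = logging.FileHandler('app-errors.log', mode='w')
file_target.setFormatter(logging.Formatter('%(asctime)s %(levelname)-8s %(message)s'))

buffered_foo = log_if_errors(logger,
                             target_handler=file_target,
                             flush_level=logging.ERROR,
                             capacity=200)(foo)

buffered_foo(False)  # no error: the buffered DEBUG/INFO/WARNING records are discarded
buffered_foo(True)   # error: the buffered records, plus the ERROR and CRITICAL ones,
                     # are flushed to app-errors.log

The same approach works with other targets from logging.handlers, for example an SMTPHandler or HTTPHandler, if you want the buffered context delivered somewhere other than a local file.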
8585
dbpedia
3
90
https://www.geico.com/information/aboutinsurance/auto/
en
Car Insurance Coverage: Auto Coverage Types & More
[ "https://www.geico.com/public/images/informationcenter/family-in-back-of-car.jpg", "https://www.geico.com/public/images/gecko-half.png", "https://www.geico.com/includes/livechat/images/geico-virtual-greeting.png", "https://www.geico.com/includes/livechat/images/gabby-speechbubble.png", "https://www.geico.com/includes/livechat/images/gabby-face.png" ]
[]
[]
[ "" ]
null
[]
null
Learn more about car insurance coverage. We'll help answer things like: What is car insurance? What types of coverages are available with your auto insurance policy?
en
/favicon.ico
https://www.geico.com/information/aboutinsurance/auto/
With so many different coverages, it's hard to know what's right for you. Understanding the different types of coverages can help you find the right car insurance policy for your needs and budget. Learn about types of auto insurance coverage with GEICO. What is car insurance? Car insurance helps provide financial protection for you, your family, other passengers, and your vehicle. You can choose the amount of protection by selecting your coverages and limits. An auto insurance policy can provide coverage for: Accidents Liability Medical expenses Property Vehicles Understanding Your Auto Insurance Your auto insurance policy consists of multiple coverages that provide protection in different situations involving your vehicle. Coverages have different limits and may have deductibles. Specific coverages and limits may also be required by a lienholder or lender. To better understand your policy, there are several insurance terms you should know: Coverage: protection and benefits provided to you Limits: maximum amount of protection for a specific coverage Policy: contract between you and your insurance company Premium: price of your insurance policy Deductible: portion you pay out of pocket, if you file a claim When you compare car insurance rates, make sure they're for the same coverages, deductibles, and limits. Shopping for car insurance is easy and fast with GEICO. Learn more about how auto insurance rates are determined. Types of Car Insurance Coverage First, let us clarify that there's no such thing as "full coverage." Some people may say "full coverage" means the minimum liability coverages for their state, comprehensive coverage and collision coverage. But it could also mean something else. Ultimately, you should choose the coverages that are right for you. "Full coverage" is an insurance myth. What are the different types of coverages? Auto insurance includes liability coverages, vehicle coverages, coverages for yourself, and other optional coverages.* Liability Coverages Liability Coverages include: Bodily Injury liability: pays damages for bodily injury or death resulting from an accident for which you are at fault Property Damage liability: pays for damage to someone else's property resulting from an accident for which you are at fault For serious accidents, you want enough insurance to cover a judgment against you in a lawsuit without jeopardizing your personal assets. Therefore, it's a good idea to have the same level of bodily injury coverage for all your cars. You may also want to consider an umbrella policy which provides additional coverage for more serious accidents and lawsuits. Medical Coverages Medical Payments coverage: may pay medical expenses related to a car accident Personal Injury Protection coverage: may pay for your medical treatment, lost wages, or other accident-related expenses regardless of who caused the accident Uninsured Motorist Coverages Uninsured Motorist coverage: may help compensate you for your injuries or property damage caused by a driver without insurance Underinsured Motorist coverage: can protect you from at-fault drivers with insufficient insurance coverage to pay your claim Vehicle Coverages Collision coverage: may pay for damage to your car when it hits, or is hit by, another vehicle or other object Comprehensive coverage: may pay for damage to your car from theft, vandalism, flood, fire or other covered losses You may be able to lower your premium if you select a higher deductible. 
If you have an older vehicle you may want to consider whether you need these coverages as they are normally limited to the cash value of your car. Additional Auto Insurance Coverages Emergency Road Service Rental Reimbursement Mechanical Breakdown Insurance There's a lot to learn about auto insurance coverages. If you have questions, contact our licensed insurance agents at (800) 861-8380. How much car insurance do I need? The right amount of auto insurance coverage comes down to your needs, budget, and state requirements. Some things to keep in mind when choosing your coverage are: Which coverages and limits does my state require for all drivers? Does my lienholder have coverage requirements? Do I need to protect any other assets, such as my home? How much can I afford to pay out of pocket at any given time? What is my vehicle worth now? We recommend checking your state insurance minimums and reviewing all coverage options to make sure you have the right coverage. Our Coverage Calculator can help you find the right amount of coverage and the right deductible to fit your budget. Why should I buy my auto insurance from GEICO? Five reasons to buy an auto insurance policy from GEICO:
8585
dbpedia
0
1
https://stackoverflow.com/questions/46419607/how-to-automatically-install-required-packages-from-a-python-script-as-necessary
en
How to automatically install required packages from a Python script as necessary?
https://cdn.sstatic.net/…g?v=73d79a89bded
https://cdn.sstatic.net/…g?v=73d79a89bded
[ "https://i.sstatic.net/d4HTC.jpg?s=64", "https://lh3.googleusercontent.com/-NzgIGIiK2VM/AAAAAAAAAAI/AAAAAAAAEW4/2cUX2QuzpAw/photo.jpg?sz=64", "https://i.sstatic.net/Z99mk.jpg?s=64", "https://i.sstatic.net/czF1r.gif?s=64", "https://i.sstatic.net/i7iLl.jpg?s=64", "https://i.sstatic.net/6JFOF.png?s=64", "https://i.sstatic.net/WWXSU.png?s=64", "https://i.sstatic.net/1reMo.jpg?s=64", "https://stackoverflow.com/posts/46419607/ivc/3f38?prg=d4e8fe01-2ecf-4ced-9fc2-bb45cc7b2761", "https://stackoverflow.com/js-false.gif" ]
[]
[]
[ "" ]
null
[]
2017-09-26T06:52:21
Is there anything in Python or Linux that basically instructs the system to "install whatever is necessary"? Basically I find it annoying to install python packages for each new script/sy...
en
https://cdn.sstatic.net/Sites/stackoverflow/Img/favicon.ico?v=ec617d715196
Stack Overflow
https://stackoverflow.com/questions/46419607/how-to-automatically-install-required-packages-from-a-python-script-as-necessary
Let's assume that your Python script is example.py:

import os
import time
import sys
import fnmatch
import requests
import urllib.request
from bs4 import BeautifulSoup
from multiprocessing.dummy import Pool as ThreadPool
print('test')

You can use pipreqs to automatically generate a requirements.txt file based on the import statements that the Python script(s) contain. To use pipreqs, assuming that you are in the directory where example.py is located:

pip install pipreqs
pipreqs .

It will generate the following requirements.txt file:

requests==2.23.0
beautifulsoup4==4.9.1

which you can install with:

pip install -r requirements.txt

You can use setuptools to install dependencies automatically when you install your custom project on a new machine. A requirements file works just fine if all you want to do is install a few PyPI packages. Here is a nice comparison between the two. From the same link you can see that if your project has two dependent packages A and B, all you have to include in your setup.py file is a line:

install_requires=[
    'A',
    'B'
]

Of course, setuptools can do much more. You can include setups for external libraries (say C files), non-PyPI dependencies, etc. The documentation gives a detailed overview on installing dependencies. There is also a really good tutorial on getting started with Python packaging. From their example, a typical setup.py file would look like this:

from setuptools import setup
setup(name='funniest',
      version='0.1',
      description='The funniest joke in the world',
      url='http://github.com/storborg/funniest',
      author='Flying Circus',
      author_email='[email protected]',
      license='MIT',
      packages=['funniest'],
      install_requires=[
          'markdown',
      ],
      zip_safe=False)

In conclusion, it is so simple to get started with setuptools. This package can make it fairly easy to migrate your code to a new machine.

Automatic requirements.txt updating approach

I'm not really sure about auto installing what is necessary, but if you settle on using requirements.txt, there are 3 approaches:

1. Generate requirements.txt after development, when we want to deploy it. It is performed by pip freeze > requirements.txt or pipreqs for a less messy result.
2. Add every module to requirements.txt manually after each install.
3. Install a manager that will handle requirements.txt updates for us.

There are many answers for the 1st option on Stack Overflow, the 2nd option is self-explanatory, so I would like to describe the 3rd approach. There is a library called to-requirements.txt. To install it, type this:

pip install to-requirements.txt  # Pip install to requirements.txt

If you read the whole command at once you would see what it does. After installing, you should set it up. Run:

requirements-txt setup

It overrides the pip scripts so that each pip install or pip uninstall updates the requirements.txt file of your project automatically with the required versions of packages. The overriding is done safely, so that after uninstalling this package pip will behave as usual. And you can customize the way it works. For example, disable it globally and activate it only for the required directories, activate it only for git repositories, or allow / disallow creating the requirements.txt file if it does not exist.

Links:
Documentation - https://requirements-txt.readthedocs.io/en/latest/
GitHub - https://github.com/VoIlAlex/requirements-txt
PyPI - https://pypi.org/project/to-requirements.txt/
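The question itself asked for a way to have a script install whatever is necessary at run time. A minimal sketch of that pattern follows; the module-to-package mapping is an illustrative assumption, and it installs with pip through the running interpreter:

import importlib
import subprocess
import sys

# Map import names to the pip packages that provide them (illustrative assumption).
REQUIRED = {"requests": "requests", "bs4": "beautifulsoup4"}


def ensure_installed(packages):
    """Try to import each module and install its pip package on ImportError."""
    for module_name, pip_name in packages.items():
        try:
            importlib.import_module(module_name)
        except ImportError:
            # Use the current interpreter so the package lands in the active environment.
            subprocess.check_call([sys.executable, "-m", "pip", "install", pip_name])


ensure_installed(REQUIRED)

import requests
from bs4 import BeautifulSoup

Installing at import time like this is handy for throwaway scripts; for anything you share or deploy, the requirements.txt and setuptools approaches above are more predictable.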
8585
dbpedia
3
28
https://stackoverflow.com/questions/64088072/why-does-vs-code-not-auto-import-packages-using-go
en
Why does VS Code not auto-import packages using Go?
https://cdn.sstatic.net/…g?v=73d79a89bded
https://cdn.sstatic.net/…g?v=73d79a89bded
[ "https://i.sstatic.net/PILz2.png", "https://i.sstatic.net/JNrsh.png", "https://i.sstatic.net/zv6rC.jpg?s=64", "https://i.sstatic.net/mjZMX.jpg?s=64", "https://i.sstatic.net/5DNhQ.png", "https://i.sstatic.net/DPw8X.png", "https://i.sstatic.net/elRlo.jpg?s=64", "https://i.sstatic.net/YeY8s.png", "https://i.sstatic.net/ARkCc.png?s=64", "https://www.gravatar.com/avatar/74b76303e93d3f50be927f55b3363cfb?s=64&d=identicon&r=PG&f=y&so-version=2", "https://i.sstatic.net/cBKAP.jpg?s=64", "https://i.sstatic.net/BlhDI.png", "https://i.sstatic.net/zv6rC.jpg?s=64", "https://stackoverflow.com/posts/64088072/ivc/3f38?prg=fe9d5b43-2f8e-483d-95c6-3c9eb9f6a25c" ]
[]
[]
[ "" ]
null
[]
2020-09-27T11:43:54
Hi I am new to Go and currently use VS Code as IDE. I am totally new to back-end development and I am trying to use Go for the job. While I was teaching myself via Youtube, I faced a problem. The p...
en
https://cdn.sstatic.net/Sites/stackoverflow/Img/favicon.ico?v=ec617d715196
Stack Overflow
https://stackoverflow.com/questions/64088072/why-does-vs-code-not-auto-import-packages-using-go
Hi, I am new to Go and currently use VS Code as my IDE. I am totally new to back-end development and I am trying to use Go for the job. While I was teaching myself via YouTube, I faced a problem: VS Code does not auto-import any package made by me. I don't know why, but I did get some clues about it.

My guess: my editor does not recognize the location of the package. My projects are located at C:\Users\John\Desktop\GoProjects, while the gopath=C:\Users\John\go and the goroot=c:\go. Can anyone give me a solution to this?
8585
dbpedia
3
53
https://support.apple.com/guide/mac-help/open-a-mac-app-from-an-unidentified-developer-mh40616/mac
en
Open a Mac app from an unidentified developer
[ "https://help.apple.com/assets/65A8106E7C69B635140E606E/65A81072C0272B1FFA02DE51/en_US/f9979df145e31ea9fb18995403d2b2f6.png", "https://help.apple.com/assets/65A8106E7C69B635140E606E/65A81072C0272B1FFA02DE51/en_US/058e4af8e726290f491044219d2eee73.png", "https://help.apple.com/assets/65A8106E7C69B635140E606E/65A81072C0272B1FFA02DE51/en_US/2f77cc85238452e25cb517130188bf99.png", "https://help.apple.com/assets/65A8106E7C69B635140E606E/65A81072C0272B1FFA02DE51/en_US/f9979df145e31ea9fb18995403d2b2f6.png" ]
[]
[]
[ "" ]
null
[]
null
If you try to open an app by an unidentified developer and you see a warning dialog, you can override your security settings to open it.
en
Apple Support
https://support.apple.com/guide/mac-help/open-a-mac-app-from-an-unidentified-developer-mh40616/mac
If you try to open an app that isn’t registered with Apple by an identified developer, you get a warning dialog. This doesn’t necessarily mean that something’s wrong with the app. For example, some apps were written before developer ID registration began. However, the app has not been reviewed, and macOS can’t check whether the app has been modified or broken since it was released. A common way to distribute malware is to take an app and insert harmful code into it, and then redistribute the infected app. So an app that isn’t registered and is from an unidentified developer might contain harmful code. The safest approach is to look for a later version of the app from the Mac App Store or look for an alternative app. To override your security settings and open the app anyway, follow these steps:
8585
dbpedia
1
8
https://groups.google.com/g/munki-discuss/c/xapas_bA9LE
en
Pkginfo for Managed-python3
https://www.gstatic.com/…/groups_32dp.png
https://www.gstatic.com/…/groups_32dp.png
[ "https://fonts.gstatic.com/s/i/productlogos/groups/v9/web-48dp/logo_groups_color_1x_web_48dp.png", "https://lh3.googleusercontent.com/a-/ALV-UjXGP7P3-o_MudMMs8fhL5c51FWcPmcxcVLTpN9iOTwRS0vVeufK=s40-c", "https://lh3.googleusercontent.com/a-/ALV-UjVl-SzPNhQmH5w6byDkdpQwpsIuKRPiqNrxUYoAzT1LC5c_qbqO=s40-c", "https://lh3.googleusercontent.com/a-/ALV-UjWDrbAFe2Cjo21BkIuXIipKhl52POgVoKkWMuo59-gyifCWBls=s40-c", "https://lh3.googleusercontent.com/a-/ALV-UjXGP7P3-o_MudMMs8fhL5c51FWcPmcxcVLTpN9iOTwRS0vVeufK=s40-c", "https://lh3.googleusercontent.com/a-/ALV-UjU0RKaUSOReYWOMz8pKdoOdxxjni0foHASCV9Kikghj_Bi81T87vQ=s40-c" ]
[]
[]
[ "" ]
null
[]
null
en
//www.gstatic.com/images/branding/product/1x/groups_32dp.png
https://groups.google.com/g/munki-discuss/c/xapas_bA9LE
* Processing manifest item Managed-Python3 for update Looking for detail for: Managed-Python3, version latest... Considering 1 items with name Managed-Python3 from catalog testing Considering item Managed-Python3, version 3.10.2.80694 with minimum os version required 10.5.0 Our OS version is 11.7 Found Managed-Python3, version 3.10.2.80694 in catalog testing * Processing manifest item Managed-Python3 for install Looking for detail for: Managed-Python3, version latest... Considering 1 items with name Managed-Python3 from catalog testing Considering item Managed-Python3, version 3.10.2.80694 with minimum os version required 10.5.0 Our OS version is 11.7 Found Managed-Python3, version 3.10.2.80694 in catalog testing Managed-Python3 version 3.10.2.80694 (or newer) is already installed. Looking for updates for: Managed-Python3 Looking for updates for: Managed-Python3-3.10.2.80694 Looking for updates for: Managed-Python3--3.10.2.80694 This is as I described. Managed Software Centre has the install button greyed out underneath it says 'Installed'. I just added it to a manifest for a second Mac also running macOS 12.6 and Munki and it shows the same result. :( This second Mac has absolutely never run this particular installer pkg. The second Mac whilst Managed Software Centre says the same thing shows fewer entries in the log output in Terminal.
8585
dbpedia
0
10
https://maclabs.jazzace.ca/2019/09/14/core-or-custom-autopkg-processors.html
en
Anthony’s Mac Labs Blog
[]
[]
[]
[ "" ]
null
[]
2019-09-14T00:00:00
Anthony’s Mac Labs Blog : Anthony Reimer’s blog for Mac Admins that shares what he’s learned recently
null
Posted 2019 September 14 The core AutoPkg processors pack a lot of punch. Everyone who uses AutoPkg depends on them. But sometimes you need something more — or, if you know how to write Python, you can see a much easier and/or elegant solution if you just write some code. The Microsoft Office sets of recipes, which I have written about in many previous posts, provide examples of how you can do the same task with a custom processor or without. This post will look at downloading and gathering version information using both methods. As in previous Office recipe posts, I will refer to three different recipe solutions: The core AutoPkg recipes [github.com/autopkg/recipes in the MSOfficeUpdates folder]; Rich Trouton’s recipes [github.com/autopkg/rtrouton-recipes in product-specific folders whose name starts with “Microsoft” or “Office”, with child recipes for Munki from Ben Toms in the datajar-recipes repo], excluding the recipes for the full Office Suite; The “SKUless” recipes in Allister Banks’ personal (non-project) GitHub repo [github.com/arubdesu/office-recipes]. In previous articles, I referred to these as the core, rtrouton, and arubdesu recipes respectively, so I will continue that usage here. However, you may notice that I have added significant qualifiers this time when I specified which recipes are being referenced, primarily to keep the discussion tidier. I am only referring to the SKUless recipes in the arubdesu/office-recipes (the ones designed to download the entire suite) because they offer an approach that is different than the other two major recipe sets and thus are useful for study. (The remainder of that repo is a smörgåsbord of different techniques, sharing code in some cases with the core recipes.) Conversely, I’ve taken the rtrouton Office Suite recipes out of the mix because they are essentially rebranded arubdesu SKUless recipes; the product-specific recipes have a unified approach that is different than the Suite and the core recipes. So with those qualifiers out of the way, let’s look at how each set downloads the desired Office apps. Downloading and Processors The most common workflow we would see in any download recipe is: Determine the URI of the item we want to download; Download a copy of the item (if we don’t have the current version in hand already); Check the code signature of the downloaded item. The aforementioned recipes for Office all conform to that. They also do the right thing by inserting an EndOfCheckPhase processor in-between Steps 2 and 3 to properly handle running the recipe with the -c or --check option. The difference is that the core AutoPkg recipes use a custom-built processor to determine the download, and the arubdesu and rtrouton recipes use a core AutoPkg processor. So what are some of the advantages and disadvantages to using a custom processor over core processors (and vice versa)? 
Advantages of a custom processor: can deal with complex/unique download and/or versioning situations; customized for that product; can be coded to use human-friendly Input values; can be more efficient; allows addition of features not currently covered by core processors.

Advantages of core processors: processors have already been vetted by hundreds of users; processors are well-documented (including changes) and perform common tasks; the recipe author does not need to be able to code in Python; easier to audit recipes for trust (especially if you don't know Python).

Disadvantages of a custom processor: requires knowledge of Python to write a custom processor; requires knowledge of Python or a good testing scheme to audit a custom processor for trust; if you can't write Python and the custom processor requires an update, you have to wait for someone else to do it.

Disadvantages of core processors: limited by what existing processors can do; may require extra steps to do the same thing as a custom processor does (if possible at all); often less efficient, code-wise (if you care about such things).

Sidebar: While not applicable in this case, there is another variety of processor called a Shared Processor. It is a custom processor (usually general-purpose in nature) that is not part of the Core processors but is posted in GitHub and meant to be shared amongst recipes. Its advantages and disadvantages sit between Core and Custom. For more information on Shared Processors, see the AutoPkg wiki.

In this case, the reason these recipes selected a custom processor over a core processor or vice versa boiled down to the source used to determine the location of the desired download.

What's Your Source? When writing AutoPkg recipes, we want as authoritative a source as possible for our downloads (and versioning, for that matter). If the application has an updating mechanism built in, our recipes are less likely to break if we use the same data source as that mechanism. This explains the presence of the GitHubReleasesInfoProvider and SparkleUpdateInfoProvider in the core processors. Both of those parse an update feed which will provide appropriate download links and version information for downloads hosted by GitHub or managed by Sparkle respectively. Microsoft rolls their own update mechanism: Microsoft AutoUpdate (MAU). The core recipes figured out how to parse the feed that MAU uses in order to download the software requested by the user — definitely an authoritative source. Using this feed gave the authors a lot of flexibility in supporting test builds such as Insider Slow and Insider Fast. Basically, as long as the processor authors were willing to write the code to support selecting those options via Input variables, users could access them with AutoPkg. This accounts for the large number of lines of code in the core recipes' processor. This also gives the recipe user the most straightforward usage: they can use a combination of meaningful words like "Production", "latest", and "Excel2019" as input values to direct what to download. While the original Office 2011 recipes focussed on updaters (expecting that you would be manually downloading the full installer from your volume license portal and deploying that first), the current set of recipes supports downloading full installers for the most common individual apps. (A full chart is available in my May 2019 post.) The rtrouton and arubdesu recipes use a different source, but arguably just as authoritative.
Microsoft has assigned a number to each product in its arsenal (called an FWLink), such that if you type https://go.microsoft.com/fwlink/?linkid= and then the appropriate 6- or 7-digit FWLink number into your browser, it will download the installer for the most current version of the appropriate product.[1] The rtrouton and arubdesu recipes leverage this, and can therefore use the core URLDownloader processor. This methodology came in handy during the transition to Office 365/2019, when new FWLink numbers came into existence and the numbers you had been using may or may not have been pointing to the variant (2016 or 365/2019) that you needed or expected. With the arubdesu SKUless recipes, you could just change one input key in your override to download the correct product. In contrast, the core recipes required code changes to the custom processor. To summarize, here's how each recipe set obtains their download:

Download | Collects Via | Source
core | Custom processor | Microsoft AutoUpdate XML
rtrouton | Core processor | Microsoft FWLink
arubdesu | Core processor | Microsoft FWLink

Versioning

The next thing to look at is obtaining version information for your download. There is a bit of a difference of opinion in the community about where in the recipe chain you should collect such information. From a purely philosophical point of view, it has been my position that download recipes should just do the steps I outlined earlier, and the AutoPkg documentation generally supports this stance: download recipes download, pkg recipes package, etcetera. Since most pkg recipes add version information to the package name, it is common to collect that information in the pkg recipe. But if you use a management system like Munki that can install items using formats other than packages (e.g., from an app inside a disk image), a pkg recipe may not be necessary. In those cases, collecting version information inside the download recipe seems sensible. It's because of this that I have softened my stance on this issue, since one of the real powers of AutoPkg is feeding your management system. Collecting version information in a download recipe may add inefficiency, but it's one less thing other users have to worry about when writing a child recipe for their management system. Regardless, you will see version information being collected in both download and pkg recipes out in the wild. Let's look at how the three sets of Office recipes we are examining collect version information:

Versioning | Collects Via | Source | Format | Recipe
core | Custom processor | Microsoft AutoUpdate XML | 16.x.build | download
rtrouton | Core processors | pkg contents | 16.x.build | pkg
arubdesu | Custom processor | macadmins.software XML | 16.x.x | download

Microsoft provides their downloads in pkg format[2] — not even wrapped in a disk image — and these do not have application version information available to be easily parsed (e.g., by the Versioner processor). So we either need another source or we have to do some spelunking. In the case of the core recipes, the MAU XML file that provided the download link also has a version number field, so the custom processor picks that information up along the way — that's a sensible, efficient way to do it. The other two recipe sets do not parse that XML file, so they need another method. The arubdesu recipes chose to write a custom processor whose sole raison d'être is to collect the version information.
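For readers who have not written one before, a custom AutoPkg processor is just a small Python class. The sketch below is not from any of the recipe sets discussed here; the class name, input variable, and the idea of a <version> element in the feed are placeholder assumptions, and it relies only on the standard autopkglib Processor interface (input_variables, output_variables, a main() method, and self.env).

from autopkglib import Processor, ProcessorError  # AutoPkg's processor base class

__all__ = ["ExampleVersionProvider"]


class ExampleVersionProvider(Processor):
    """Illustrative sketch: pulls a version string out of an XML feed."""

    description = "Fetches a version number from a placeholder XML feed."
    input_variables = {
        "feed_url": {
            "required": True,
            "description": "URL of the XML feed to parse (placeholder).",
        },
    }
    output_variables = {
        "version": {"description": "Version string found in the feed."},
    }

    def main(self):
        import urllib.request
        import xml.etree.ElementTree as ET

        try:
            with urllib.request.urlopen(self.env["feed_url"]) as response:
                root = ET.fromstring(response.read())
        except Exception as err:
            raise ProcessorError(f"Could not read or parse feed: {err}")

        # Assumes the feed exposes a top-level <version> element; real feeds differ.
        version = root.findtext("version")
        if not version:
            raise ProcessorError("No version found in feed.")
        self.env["version"] = version


if __name__ == "__main__":
    PROCESSOR = ExampleVersionProvider()
    PROCESSOR.execute_shell()

A recipe would list such a processor in its Process array and hand the resulting version output variable to later steps.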
The arubdesu processor parses a different XML file, manually maintained by Paul Bowden of Microsoft, that gives the simpler 16.x.x version number, and since Microsoft doesn't do silly things like have more than one release of a point update (like a particular fruit company with their OS updates), this value should also work well with management systems. The main objection I've heard to the use of this source for version numbers is that it is manually (not automatically) generated. That means it could be out of sync with the actual package you are downloading. In both the core recipes and arubdesu download recipes, gathering the version number via a custom processor allows those recipes to name the package with the version number included.[3] This is why the arubdesu download recipe gathers the version information before actually downloading the installer. For the core recipes, both those functions are within the same processor, so from a user perspective they happen simultaneously.

The rtrouton recipes take another common approach: examine the download and get the version information from there. As long as the vendor hasn't done something stupid with version numbers (by commission or omission), this is the most authoritative source. In the case of the main Office suite apps (Word, Excel, PowerPoint, etc.), you have to dig down a fair distance into the installer to get the version information, but it is there and it is in a repeatable, specific location. And what else is AutoPkg for if not to automate repetitive tasks? Rich cleverly figured out a way to extract that information using just the core processors. As an example, let's take a look at his steps to download Microsoft Excel 365 and the processors he used:

Step | Recipe | Processor | Notes
1 | download | URLDownloader | download pkg installer; name it Microsoft_Excel.pkg by default
2 | download | EndOfCheckPhase | included for those using the --check option
3 | download | CodeSignatureVerifier | verifies code signature of download
4 | pkg | FlatPkgUnpacker | unpacks the installer into downloads/unpack directory inside the recipe cache
5 | pkg | FileFinder | find the filename of the pkg installer that has the Excel app inside of it
6 | pkg | PkgPayloadUnpacker | unpack the payload of the pkg installer located in the previous step into downloads/payload
7 | pkg | Versioner | extract the version information from the Excel app revealed by the previous processor (16.x.buildnumber format)[4]
8 | pkg | PkgCopier | copy the pkg originally downloaded, renaming it with the version number appended
9 | pkg | PathDeleter | delete the originally-downloaded pkg and all the unpacked versions, leaving just the renamed pkg

The split between .download and .pkg recipes makes great sense here. The download recipe does fetch a pkg, but it is not in the desired format for Rich's management system. So if you don't need version information, you could use his download recipe as a parent. If you do, the pkg recipe can be your parent. And since the pkg recipes only use core processors, you don't have to write any Python.

Take Your Pick
8585
dbpedia
3
44
https://medium.com/backend-habit/setting-golang-plugin-on-vscode-for-autocomplete-and-auto-import-30bf5c58138a
en
Setting Golang Plugin on VSCode for Autocomplete and Auto-import
https://miro.medium.com/v2/da:true/resize:fit:1200/0*fjn45vVnUe2wagRH
https://miro.medium.com/v2/da:true/resize:fit:1200/0*fjn45vVnUe2wagRH
[ "https://miro.medium.com/v2/resize:fill:64:64/1*dmbNkD5D-u45r44go_cf0g.png", "https://miro.medium.com/v2/resize:fill:88:88/1*yAKKM40ML5P-tBEg7zzy6g.jpeg", "https://miro.medium.com/v2/resize:fill:48:48/1*2D1e0M3WR9kt76fCeimuOg.png", "https://miro.medium.com/v2/resize:fill:144:144/1*yAKKM40ML5P-tBEg7zzy6g.jpeg", "https://miro.medium.com/v2/resize:fill:64:64/1*2D1e0M3WR9kt76fCeimuOg.png" ]
[]
[]
[ "" ]
null
[ "Teten Nugraha", "medium.com", "@teten.nugraha" ]
2020-05-13T09:22:39.843000+00:00
Okay, long time no see. On this article I’ll share about go language installation and plugin using visual studio so that you can use Auto-Completion and Auto-import Golang Plugin. Install golang as…
en
https://miro.medium.com/v2/5d8de952517e8160e40ef9841c781cdc14a5db313057fa3c3de41c6f5b494b19
Medium
https://medium.com/backend-habit/setting-golang-plugin-on-vscode-for-autocomplete-and-auto-import-30bf5c58138a
Okay, long time no see. In this article I'll cover Go installation and the Visual Studio Code plugin setup so that you can use the Golang plugin's auto-completion and auto-import features.

Step 1: Go to the Go website and download the installer for your operating system.

Step 2: Install Go as usual, then set GOPATH manually. I create a go-workspace folder under C:\Users\{user}\Documents, which we can use as GOPATH. After that, add it to your user environment variables like this.

Step 3: Under the go-workspace folder, create three folders called bin, pkg, and src.

Step 4: Open your terminal and type go env to confirm the settings.

Step 5: Open Visual Studio Code, search for the Golang plugin, and install it.

Step 6: Still in VS Code, click View -> Command Palette (or press Ctrl+Shift+P), run "Go: Install/Update Tools", check all dependencies, and click OK. It will take some time to download all the dependencies.

Step 7: Add some custom configuration to the user settings.json (a sketch is shown below), restart VS Code, make a simple project, and now you can use the plugin's features.
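The settings snippet referenced in Step 7 is missing from the text above. As a rough sketch only (these keys come from the legacy VS Code Go extension and are an assumption about what the article showed; current gopls-based setups use different options), the configuration is typically along these lines:

{
  "go.formatTool": "goimports",
  "go.autocompleteUnimportedPackages": true,
  "editor.formatOnSave": true
}

With goimports as the format tool and format-on-save enabled, missing import statements are usually added automatically when you save a file.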
8585
dbpedia
2
2
https://docs.veracode.com/r/About_auto_packaging
en
About auto-packaging
https://docs.veracode.co…code-favicon.png
https://docs.veracode.co…code-favicon.png
[ "https://docs.veracode.com/img/Veracode_Docs_Logo_Light_Mode.svg", "https://docs.veracode.com/img/Veracode_Docs_Logo_Dark_Mode.svg" ]
[]
[]
[ "" ]
null
[]
2024-08-08T19:40:14+00:00
Veracode auto-packaging automates the process of packaging your projects for Static Analysis and Software Composition Analysis (SCA) upload and scan. By automating packaging, you can reduce the burden on your teams to correctly package projects manually, while also ensuring more accurate and consistent scan results.
en
/img/veracode-favicon.png
https://docs.veracode.com/r/About_auto_packaging
Veracode auto-packaging automates the process of packaging your projects for Static Analysis and Software Composition Analysis (SCA) upload and scan. By automating packaging, you can reduce the burden on your teams to correctly package projects manually, while also ensuring more accurate and consistent scan results. Saves time and effort, compared to manual packaging, by eliminating manual steps, such as gathering files and dependencies, configuring build settings, and packaging artifacts. Ensures a consistent build process across different environments and platforms. This reduces the risk of discrepancies or errors that can occur when developers manually change the build configurations or there are variations across the configurations. Reduces human errors that can occur when developers package projects manually. This improves the accuracy and reliability of the generated artifacts, which ensures that the Static Analysis results are accurate. Enables scalability by facilitating the rapid and efficient generation of artifacts for analysis across multiple code repositories, projects, or teams. This scalability is essential for organizations managing large and complex codebases. Reduces the time and resources developers spend securing their code, which allows them to focus on writing new code, implementing features, or addressing critical issues. Developers can increase their productivity and accelerate the time-to-market for software products and updates. The auto-packager runs on your repository to package your projects into artifacts (archive files) that you can upload to the Veracode Platform. To correctly package a project for Static Analysis or SCA upload and scan, the auto-packager automatically detects the required components and configurations for each supported language. The auto-packager packages your projects into archive files, such as ZIP, JAR, WAR or EAR, called artifacts. During the packaging process, the auto-packager might create multiple artifacts that it includes in the final artifacts. For example, multiple DLL files inside the final ZIP file. The final artifacts are the complete, packaged archive files that you can upload to Veracode and scan separately. The following table lists examples of the filename format of the final artifacts for each supported language. 
Artifact language | Language tag | Language suffix tag | Example filename
.NET assemblies | dotnet | None | veracode-auto-pack-Web-dotnet.zip
.NET with JavaScript | dotnet | js | veracode-auto-pack-Web-dotnet-js.zip
Android | None | None | The gradle.build file defines the filenames of Java artifacts.
COBOL | cobol | None | veracode-auto-pack-EnterpriseCOBOLv6.3-cobol.zip
C/C++ Linux | c_cpp | None | veracode-auto-pack-CppProjectLibsAndExecutables-c_cpp.zip
C/C++ Windows | msvc | None | veracode-auto-pack-$(SolutionName)-msvc.zip
Dart and Flutter | None | None | The project configuration for Flutter Android or Xcode defines the filenames.
Go | go | None | veracode-auto-pack-evil-app-go.zip
iOS with Xarchive | ios | xcarchive | veracode-auto-pack-duckduckgo-ios-xcarchive.zip
iOS with CocoaPods | ios | podfile | veracode-auto-pack-signal-ios-podfile.zip
Java with Gradle | None | None | Defined by your gradle.build file.
Java with Maven | None | None | Defined by your pom.xml file.
JavaScript | js | None | veracode-auto-pack-NodeGoat-js.zip
Kotlin | None | None | The filenames of Java artifacts are defined by your gradle.build file.
Perl | perl | None | veracode-auto-pack-bugzilla-perl.zip
PHP | php | None | veracode-auto-pack-captainhook-php.zip
Python | python | None | veracode-auto-pack-dvsa-python.zip
React Native | js | None | veracode-auto-pack-convene-js.zip
Ruby | ruby | None | veracode-auto-pack-railsgoat-ruby.zip
Scala | None | None | The filenames of Java artifacts are defined by your SBT build properties.

Auto-packaging is integrated with the following products:
- Veracode CLI to integrate auto-packaging in your development environment.
- Veracode GitHub Workflow Integration to automate repo scanning with GitHub Actions. The auto-packager only supports Java, JavaScript, Python, Go, Scala, Kotlin, React Native, and Android repositories.
- Veracode Azure DevOps Workflow Integration to automate repo scanning using your pipelines. The auto-packager supports Java, .NET, JavaScript, Python, Go, Kotlin, and React Native projects.
- Veracode Scan for JetBrains to auto-package applications, scan, and remediate findings in JetBrains IDEs.
- Veracode Scan for VS Code to auto-package applications, scan, and remediate findings in VS Code.

You can integrate the auto-packager with your local build environment or CI/CD. For example, to add auto-packaging to your build pipelines, you could add the CLI command veracode package to your development toolchains or build scripts. You might need to install one or more of the following tools in your environment:
- A build automation tool that defines build scripts or configurations that specify how to manage dependencies, compile source code, and package code as artifacts.
- A dependency management system to effectively handle project dependencies.
- A compiler that builds source code into executable code.

If the auto-packager does not support specific versions, or it relies on a version supported by your package manager, the Versions column shows Not applicable.

Language | Versions | Package managers
.NET | .NET 6, 7, or 8. .NET Framework 4.6 - 4.8. Not supported: MAUI | All
Android | A JDK version that you have tested to build your project. | Gradle
COBOL | COBOL-74, COBOL-85, COBOL-2002 | Not applicable
C/C++ Linux | CentOS and Red Hat Enterprise 5-9, openSUSE 10-15 | Not applicable
C/C++ Windows | C/C++ (32-bit/64-bit) | Not applicable
Dart and Flutter | Dart 3.3 and earlier / Flutter 3.19 and earlier | Pub
Go | 1.14 - 1.22 | Go Modules
iOS | Not applicable | All
Java (select from the Package managers column) | A JDK version that you have tested to build your project. | Gradle, Maven
JavaScript and TypeScript | Not applicable | NPM, Yarn
Kotlin | A JDK version that you have tested to build your project. | Gradle, Maven
Perl | 5.x | Not applicable
PHP | Not applicable | Composer
Python | Not applicable | Pip, Pipenv, setuptools, virtualenv
React Native | Not applicable | NPM, Yarn, Bower
Ruby on Rails | Ruby 2.4 or greater | Bundler
Scala | A JDK version that you have tested to build your project. | Gradle, Maven, sbt

Under each supported language, the Veracode CLI commands and output examples demonstrate the packaging process when you run the veracode package command. You can use the auto-packager with various integrations, but the CLI output examples help you visualize the packaging process. All examples assume the location of the CLI executable is in your PATH. You might see different output in your environment.

Before you can run the auto-packager, you must meet the following requirements:
- Your environment must have: a supported version of .NET, and a PATH environment variable that points to the dotnet or msbuild command.
- Your projects must: contain at least one syntactically correct .csproj file, and compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output.
1. Recursively searches your repo for all .csproj submodules.
2. To publish an SDK-style project, runs the following command: dotnet publish -c Debug -p:UseAppHost=false -p:SatelliteResourceLanguages='en' -p:WasmEnableWebcil=false -p:BlazorEnableCompression=false
3. To publish a .NET Framework project, runs a command similar to the following: msbuild Project.csproj /p:TargetFrameworkVersion=v4.5.2 /p:WebPublishMethod="FileSystem" /p:PublishProvider=FileSystem /p:LastUsedBuildConfiguration=Debug /p:LastUsedPlatform=Any CPU /p:SiteUrlToLaunchAfterPublish=false /p:LaunchSiteAfterPublish=false /p:ExcludeApp_Data=true /p:PrecompileBeforePublish=true /p:DeleteExistingFiles=true /p:EnableUpdateable=false /p:DebugSymbols=true /p:WDPMergeOption="CreateSeparateAssembly" /p:UseFixedNames=true /p:UseMerge=false /p:DeployOnBuild=true
4. Filters out any test projects.
5. Packages the published project and saves the artifacts of your packaged project in the specified --output location.

veracode package --source path/to/project/bobs-used-bookstore-sample --output verascan --trust
Packager initiated...
Verifying source project language ...
Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Data'. Publish successful.
Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Web'. Publish successful.
Project Bookstore.Web zipped and saved to: path\to\verascan\veracode-auto-pack-Bookstore.Web-dotnet.zip
DotNet project Bookstore.Web JavaScript packaged to: path\to\verascan\veracode-auto-pack-Bookstore.Web-dotnet-js.zip
Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Cdk'. Publish successful.
Project Bookstore.Cdk zipped and saved to: path\to\verascan\veracode-auto-pack-Bookstore.Cdk-dotnet.zip
Packaging DOTNET artifacts for DotNetPackager project 'Bookstore.Domain'. Publish successful.
Successfully created 3 artifact(s).
Created DotNet artifacts for DotNetPackager project. Total time taken to complete command: 11.656s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: Correct Java or Kotlin version present in the environment for packaging the application. Correct Android SDK version present in the environment for packaging the application. Other dependencies installed based on the repository dependency. The auto-packager completes the following steps, as shown in the example command output. To build a Gradle project, runs the command gradlew clean build -x test Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/sunflower --output verascan --trust Packaging code for project sunflowe. Please wait; this may take a while... Verifying source project language ... Copying Java artifacts for GradlePackager project. Copied artifact: path/to/verascan/app-benchmark.apk. Copied artifact: path/to/verascan/app-debug.apk. Copied artifact: path/to/verascan/macrobenchmark-benchmark.apk. Successfully created 3 artifact(s). Created Java artifacts for GradlePackager project. Total time taken to complete command: 1m35.117s Before you can run the auto-packager, you must meet the following requirements: Your COBOL programs must be in UTF-8 encoded files with one of the following extensions: .cob, .cbl, .cobol, or .pco. Your COBOL copybooks must be in UTF-8 encoded .cpy files. Veracode recommends you include all copybooks to generate the best scan results. The auto-packager completes the following steps, as shown in the example command output. Finds all the files matching the required extensions and packages them in a ZIP archive (artifact). Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/EnterpriseCOBOLv6.3 --output verascan --trust Packaging code for project EnterpriseCOBOLv6.3. Please wait; this may take a while... Verifying source project language ... [GenericPackagerCobol] Packaging succeeded for the path path/to/project/EnterpriseCOBOLv6.3 Successfully created 1 artifact(s). Created Cobol artifacts for GenericPackagerCobol project. Total time taken to complete command: 3.802s Before you can run the auto-packager, you must meet the following requirements: All project files and libraries have been compiled with debug information defined in the packaging guidelines. Auto-packaging must run on supported Linux OS architecture and distribution. For efficient packaging, all binaries and libraries have been collected in a single folder. The auto-packager completes the following steps, as shown in the example command output. Detects a Veracode-supported Linux OS architecture. If it does not detect a supported architecture, the auto-packager throws an error and exits packaging. Detects a Veracode-supported Linux OS distribution. Searches the prebuilt binary directory to find scan-supported binary files, then archives them in a single artifact. veracode package --source path/to/project/CppProjectLibsAndExecutables --output verascan --trust Packaging code for project CppProjectLibsAndExecutables. Please wait; this may take a while... Verifying source project language ... C/CPP project CppProjectLibsAndExecutables packaged to: /path/to/verascan/veracode-auto-pack-CppProjectLibsAndExecutables-c_cpp.zip Successfully created 1 artifact(s). Created CPlusPlus artifacts for GenericPackagerCPP project. 
Total time taken to complete command: 37.257s Before you can run the auto-packager, you must meet the following requirements: The project must contain at least one .sln file that is configured to build at least one supported C++ project. A supported C++ project is defined by a .vcxproj file where the following are true: Defines a supported project configuration: Targets a supported platform (x64 or Win32) Builds a supported binary (ConfigurationType is Application or DynamicLibrary) Is not a test Native Unit Test project or Google Unit Test project. msbuild command is available in the environment. Code can compile without errors. The auto-packager completes the following steps, as shown in the example command output. Searches the project directories to find supported .sln files. The search stops at each directory level where it finds supported files. For each .sln file found: Determines the solution configuration to use to build the top-level projects. If available, it uses the first solution configuration listed in the solution that has a supported project platform for a top-level C++ project, configured as a debug build. Determines the supported top-level C++ projects for that solution configuration. A top-level C++ project is a C++ project that is not a dependency of any other project configured to build for that solution configuration. Builds each supported top-level C++ project using compiler and linker settings required for Veracode to analyze Windows C/C++ applications: <ItemDefinitionGroup> <ClCompile> <DebugInformationFormat>ProgramDatabase</DebugInformationFormat> <Optimization>Disabled</Optimization> <BasicRuntimeChecks>Default</BasicRuntimeChecks> <BufferSecurityCheck>false</BufferSecurityCheck> </ClCompile> <Link> <LinkIncremental>false</LinkIncremental> <GenerateDebugInformation>true</GenerateDebugInformation> <ProgramDatabaseFile>$(OutDir)$(TargetName).pdb</ProgramDatabaseFile> </Link> </ItemDefinitionGroup> Creates an archive for each solution named veracode-auto-pack-$(SolutionName)-msvc.zip. Each archive contains a $(ProjectName) directory with all .exe, .dll, and .pdb build artifacts for each top-level project build target of the solution. veracode package --source path/to/project/example-cpp-windows --output verascan --trust Packaging code for project example-cpp-windows. Please wait; this may take a while... Verifying source project language ... Packaging Windows C/C++ artifacts for WinCppPackager publish path 'C:\Users\...\AppData\Local\Temp\2766238912731991934'. MSBuild commands successfully completed. Windows solution WS_AllSource packaged to: path\to\verascan\veracode-auto-pack-WS_AllSource-msvc.zip Packaging Windows C/C++ artifacts for WinCppPackager publish path 'C:\Users\...\AppData\Local\Temp\7662002083651398436'. MSBuild commands successfully completed. Windows solution allPepPCIF packaged to: path\to\verascan\veracode-auto-pack-allPepPCIF-msvc.zip Successfully created 2 artifact(s). Created Windows C/C++ artifacts for WinCppPackager project. Total time taken to complete command: 3m38.473s Before you can run the auto-packager, you must meet the following requirements: To ensure that Flutter installs successfully and validates all platform tools, successfully run flutter doctor. 
To generate an iOS Archive file, the project must be able to run the command: flutter build ipa --debug To generate an Android APK file, the project must be able to run the command: flutter build apk --debug The auto-packager completes the following steps, as shown in the example command output. Gathers APK and IPA files. Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/flutter-wonderous-app --output verascan --trust Packaging code for project flutter-wonderous-app. Please wait; this may take a while... Verifying source project language ... Copying artifacts for Dart Flutter for FlutterPackager project. Copied artifact: path/to/verascan/app-debug.apk. Successfully created 1 artifact(s). Created Dart artifacts for FlutterPackager project. Total time taken to complete command: 54.731s Before you can run the auto-packager, you must meet the following requirements: Your environment must have a supported version of Go. Your projects must: Support Go Modules. Contain a go.sum file and a go.mod file. Compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. To build and package a project, including the source code and the vendor folder, runs the command go mod vendor. Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/sftpgo --output verascan --trust Please ensure your project builds successfully without any errors. Packaging code for project sftpgo. Please wait; this may take a while... Verifying source project language ... Packaging GO artifacts for GoModulesPackager project 'sftpgo'. go mod vendor successful. Go project sftpgo packaged to: path/to/verascan/veracode-auto-pack-sftpgo-go.zip Successfully created 1 artifact(s). Created GoLang artifacts for GoModulesPackager project. Total time taken to complete command: 15.776s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: Xcode and the xcodebuild command-line tool installed. gen-ir installed. For example: # Add the brew tap to your local machine brew tap veracode/tap # Install the tool brew install gen-ir pod installed, if your projects use CocoaPods or third party tools. Your projects must compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. Checks that the podfile or podfile.lock files are present. Runs the command pod install. Checks that the .xcworkspace or .xcodeproj files are present. To build and package the project, runs: xcodebuild clean archive -PROJECT/WORKSPACE filePath -scheme SRCCLR_IOS_SCHEME -destination SRCCLR_IOS_DESTINATION -configuration SRCCLR_IOS_CONFIGURATION -archivePath projectName.xcarchive DEBUG_INFORMATION_FORMAT=dwarf-with-dsym ENABLE_BITCODE=NO The SRCCLR values are optional environment variables you can use to customize the xcodebuild archive command. Runs gen-ir on the artifact of your packaged project and the log files. Saves the artifact in the specified --output location. veracode package --source https://github.com/signalapp/Signal-iOS --type repo --output verascan --trust Packager initiated... Verifying source project language ... Packaging iOS artifacts for IOSPackager project 'MyProject'. iOS Project MyProject zipped and saved to: path/to/verascan/veracode-auto-pack-MyProject-ios-xcarchive.zip Successfully created 1 artifact(s). 
Created IOS artifacts for IOSPackager project. Total time taken to complete command: 9.001s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: A JDK version that you tested to successfully compile your application. Access to a gradlew command that points to the correct JAVA_HOME directory. If gradlew is not available, ensure the correct Gradle version is installed. Your projects must: Have the correct build.gradle file. Compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. To build the Gradle project and package it as a JAR file, runs the command gradlew clean build -x test. Copies the artifact of your packaged project to the specified --output location. veracode package --source path/to/project/example-java-gradle --output verascan --trust Packager initiated... Verifying source project language ... Copying Java artifacts for GradlePackager project. Copied artifact: path/to/verascan/example-java-gradle-1.0-SNAPSHOT.jar. Successfully created 1 artifact(s). Created Java artifacts for GradlePackager project. Total time taken to complete command: 7.174s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: A JDK version that you tested to successfully compile your application. Access to a mvn command that points to the correct JAVA_HOME directory. Your projects must: Have the correct pom.xml file. Compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. To build and package the Maven project, runs the command mvn clean package. Copies the artifact, such as JAR, WAR, EAR, of your packaged project to the specified --output location. veracode package --source path/to/project/example-java-maven --output verascan --trust Packager initiated... Verifying source project language ... Copying Java artifacts for Maven project. Copied artifact: path/to/verascan/example-java-maven-1.0-SNAPSHOT.jar. Successfully created 1 artifact(s). Created Java artifacts for Maven project. Total time taken to complete command: 6.799s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: The NPM or Yarn package manager installed. The correct Node, NPM, or Yarn version to package the project. Your projects must: Be able to resolve all dependencies with commands npm install or yarn install. Have the correct package.json file. Compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. To build and package the project, runs one of the following commands: For NPM, runs the command npm install. For Yarn, runs the command yarn install. Copies the artifact of your packaged project to the specified --output location. veracode package --source path/to/project/example-javascript --output verascan --trust Packager initiated... Verifying source project language ... Packaging Javascript artifacts for NPM project. Project example-javascript packaged to path/to/veracsan/veracode-auto-pack-example-javascript-js.zip. Successfully created 1 artifact(s). Created Javascript artifacts for NPM project. Total time taken to complete command: 3.296s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: The correct Kotlin version for your projects. The Maven or Gradle package manager installed. 
A Java version that your packager manager requires. Your projects must: Have the correct pom.xml, build.gradle, or build.gradle.kts file. Compile successfully without errors. The auto-packager completes the steps shown in the following example command output. Verifies that your project language is supported. Uses Gradle to builds and packages the project. Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/kotlin-server-side-sample/gradle --output verascan --trust Packager initiated... Verifying source project language ... Copying Java artifacts for GradlePackager project. Copied artifact: path/to/verascan/demo-0.0.1-SNAPSHOT-plain.jar. Copied artifact: path/to/verascan/demo-0.0.1-SNAPSHOT.jar. Successfully created 2 artifact(s). Created Java artifacts for GradlePackager project. Total time taken to complete command: 8.632s Before you can run the auto-packager, you must meet the following requirements: Your Perl project must be a version 5.x Your project must contain at least one file with the following extensions: of .pl, .pm, .plx, .pl5, or .cgi The auto-packager completes the following steps, as shown in the example command output. Finds all the files matching the required extensions and packages them in a ZIP archive (artifact). Copies the artifacts of your packaged project to the specified --output location. veracode package --source path/to/project/bugzilla --output verascan --trust Packaging code for project bugzilla. Please wait; this may take a while... Verifying source project language ... Packaging code for project bugzilla. Please wait; this may take a while... Verifying source project language ... [GenericPackagerPerl] Packaging succeeded for the path path/to/project/bugzilla. Successfully created 1 artifact(s). Created Perl artifacts for GenericPackagerPerl project. Total time taken to complete command: 9.965s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: Correct PHP version for your projects. Composer dependency manager installed. Your projects must: Have the correct PHP composer.json file. Compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. To build and package the project source code and lock file with Composer, runs the command composer install. Saves the artifacts of your packaged project in the specified --output location. veracode package --source path/to/project/example-php --output verascan --trust Packager initiated... Validating output path ... Packaging PHP artifacts for Composer project. Project captainhook zipped and saved to path/to/verascan/veracode-auto-pack-captainhook-php.zip. Packaging PHP artifacts for Composer project. Project template-integration zipped and saved to path/to/verascan/veracode-auto-pack-template-integration-php.zip. Successfully created 2 artifact(s). Created PHP artifacts for Composer project. Total time taken to complete command: 3.62s Before you can run the auto-packager, you must meet the following requirements: Your environment must have: The correct pip and Python or pyenv version for packaging your project are installed. A package manager configuration file with the required settings to resolve all dependencies. Your projects must compile successfully without errors. The auto-packager completes the following steps, as shown in the example command output. 
For Python projects, you must meet the following requirements before you can run the auto-packager:

Your environment must have:
- The correct pip and Python (or pyenv) versions installed for packaging your project.
- A package manager configuration file with the required settings to resolve all dependencies.

Your projects must compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output:
1. To resolve all third-party dependencies and generate the lock file, runs the pip install command pip install -r requirements.txt.
2. Packages the project source code, lock file, and vendor folder.
3. Saves the artifact of your packaged project to the specified --output location.

veracode package --source path/to/project/example-python --output verascan --trust
Packager initiated...
Verifying source project language ...
Packaging Python artifacts for PIP project.
Project example-python zipped and saved to path/to/verascan/veracode-auto-pack-example-python-python.zip.
Successfully created 1 artifact(s).
Created Python artifacts for PIP project.
Total time taken to complete command: 14.359s

For React Native projects, you must meet the following requirements before you can run the auto-packager:

Your environment must have:
- The correct version of Node, NPM, or Yarn for your projects.
- An NPM or Yarn installation that resolves all dependencies.

Your projects must:
- Have the correct package.json file.
- List the React Native version as a dependency in the package.json file.

The auto-packager completes the following steps, as shown in the example command output:
1. For NPM applications, runs the npm install command.
2. For Yarn applications, runs the yarn install command.
3. For Expo builds, runs the expo start command.

veracode package --source path/to/project/example-javascript-yarn --output verascan --trust
Packaging code for project example-javascript-yarn. Please wait; this may take a while...
Verifying source project language ...
Packaging Javascript artifacts for Yarn project.
JavaScript project example-javascript-yarn packaged to: path/to/verascan/veracode-auto-pack-example-javascript-yarn-js.zip
Successfully created 1 artifact(s).
Created Javascript artifacts for Yarn project.
Total time taken to complete command: 1m9.13s
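For a React Native project that uses Yarn, a similar pre-flight check applies: confirm that yarn install resolves all dependencies and that React Native appears as a dependency in package.json before running the auto-packager. The sketch below is illustrative; the grep check and the project path are assumptions, not part of the Veracode CLI.

# Confirm the React Native dependency is declared in package.json.
grep '"react-native"' path/to/project/example-javascript-yarn/package.json

# Resolve dependencies with Yarn, as the auto-packager will.
(cd path/to/project/example-javascript-yarn && yarn install)

# Package the project for upload.
veracode package --source path/to/project/example-javascript-yarn --output verascan --trust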
For Ruby on Rails projects, you must meet the following requirements before you can run the auto-packager:

Your environment must have:
- The Bundler package manager installed with the correct Ruby version.
- The Veracode packager gem installed. This gem handles pre-processing of Rails projects for Static Analysis.
- The ability to run the command bundle install.

Your projects must compile successfully without errors. Optionally, to test your configured environment, run the command rails server.

The auto-packager completes the following steps, as shown in the example command output:
1. To configure the vendor path, runs the command bundle config --local path vendor.
2. Runs the command bundle install without the development and test groups: bundle install --without development test.
3. To check for the Rails installation, runs the command bundle info rails. If Rails is not installed, the auto-packager assumes it is not a Rails project and exits.
4. To install the Veracode packager gem, runs the command bundle add veracode.
5. To package your project using the Veracode packager gem, runs the command bundle exec veracode.
6. Saves the artifact of your packaged project to the specified --output location.

veracode package --source path/to/project/rails --output verascan --trust
Packager initialized...
Verifying source project language ...
Packaging Ruby artifacts for RubyPackager project 'veracode-rails-20240321225855.zip'.
ArtifactPath: /rails/tmp/veracode-rails-20240321225855.zip
ValidatedSource: /rails
ValidatedOutput: /rails/verascan
Project name: rails
44824469 bytes written to destination file. Path: /rails/verascan/rails.zip
temporary zip file deleted. Path: /rails/tmp/veracode-rails-20240321225855.zip
Successfully created 1 artifact(s).
Created Ruby artifacts for RubyPackager project.
Total time taken to complete command: 1m27.428s
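Before packaging a Rails application, you can confirm that Bundler resolves the project's gems and, optionally, that the application boots, which mirrors the requirements described above. This is a minimal sketch with an illustrative project path.

# Confirm the bundle resolves; the auto-packager later re-runs
# bundle install without the development and test groups.
(cd path/to/project/rails && bundle install)

# Optional: verify the configured environment by booting the app.
# (cd path/to/project/rails && rails server)

# Package the project; the auto-packager adds and runs the veracode gem for you.
veracode package --source path/to/project/rails --output verascan --trust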
For Scala projects built with sbt, you must meet the following requirements before you can run the auto-packager:

Your environment must have:
- A JDK version that you have tested to successfully package your application.
- The Maven, Gradle, or sbt package manager installed with the correct Java version.

Your projects must:
- Have the correct pom.xml, build.gradle, or build.sbt file.
- Compile successfully without errors.

The auto-packager completes the following steps, as shown in the example command output:
1. Runs the sbt assembly command sbt clean assembly. This command assists in creating a JAR file with dependencies in non-Spring projects, which improves SCA scanning.
2. If sbt assembly fails, runs the sbt package command sbt clean package.
3. Copies the artifacts of your packaged application to the specified --output location.

veracode package --source path/to/project/packSample/zio-quill --output verascan --trust
Packager initiated...
Verifying source project language ...
Copying Java artifacts for SbtPackager project.
Copied artifact: path/to/verascan/quill-cassandra_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-cassandra-monix_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-cassandra-pekko_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-cassandra-zio_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-codegen_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-codegen-jdbc_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-codegen-tests_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-core_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-doobie_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-engine_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-monix_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-h2_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-mysql_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-oracle_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-postgres_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-sqlite_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-test-sqlserver_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-jdbc-zio_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-monix_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-orientdb_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-spark_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-sql_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-sql-test_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-util_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill-zio_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/quill_2.13-4.8.2+3-d2965801-SNAPSHOT.jar.
Copied artifact: path/to/verascan/zio-quill-docs_2.12-4.8.2+3-d2965801-SNAPSHOT.jar.
Successfully created 28 artifact(s).
Created Java artifacts for SbtPackager project.
Total time taken to complete command: 45.428s
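For sbt builds, running the assembly task locally is a quick way to confirm that the auto-packager's primary path will succeed before it has to fall back to sbt clean package. This sketch assumes the project has the sbt-assembly plugin available; the project path is illustrative.

# Confirm the project assembles; the auto-packager tries sbt clean assembly
# first and falls back to sbt clean package if assembly fails.
(cd path/to/project/packSample/zio-quill && sbt clean assembly)

# Package the project; a JAR for each sub-project is copied to the --output directory.
veracode package --source path/to/project/packSample/zio-quill --output verascan --trust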
Package Development Introduction A Note on Facades Package Discovery Service Providers Resources Configuration Migrations Routes Language Files Views View Components "About" Artisan Command Commands Public Assets Publishing File Groups Introduction Packages are the primary way of adding functionality to Laravel. Packages might be anything from a great way to work with dates like Carbon or a package that allows you to associate files with Eloquent models like Spatie's Laravel Media Library. There are different types of packages. Some packages are stand-alone, meaning they work with any PHP framework. Carbon and Pest are examples of stand-alone packages. Any of these packages may be used with Laravel by requiring them in your composer.json file. On the other hand, other packages are specifically intended for use with Laravel. These packages may have routes, controllers, views, and configuration specifically intended to enhance a Laravel application. This guide primarily covers the development of those packages that are Laravel specific. A Note on Facades When writing a Laravel application, it generally does not matter if you use contracts or facades since both provide essentially equal levels of testability. However, when writing packages, your package will not typically have access to all of Laravel's testing helpers. If you would like to be able to write your package tests as if the package were installed inside a typical Laravel application, you may use the Orchestral Testbench package. Package Discovery A Laravel application's bootstrap/providers.php file contains the list of service providers that should be loaded by Laravel. However, instead of requiring users to manually add your service provider to the list, you may define the provider in the extra section of your package's composer.json file so that it is automatically loaded by Laravel. In addition to service providers, you may also list any facades you would like to be registered: Once your package has been configured for discovery, Laravel will automatically register its service providers and facades when it is installed, creating a convenient installation experience for your package's users. Opting Out of Package Discovery If you are the consumer of a package and would like to disable package discovery for a package, you may list the package name in the extra section of your application's composer.json file: You may disable package discovery for all packages using the * character inside of your application's dont-discover directive: Service Providers Service providers are the connection point between your package and Laravel. A service provider is responsible for binding things into Laravel's service container and informing Laravel where to load package resources such as views, configuration, and language files. A service provider extends the Illuminate\Support\ServiceProvider class and contains two methods: register and boot. The base ServiceProvider class is located in the illuminate/support Composer package, which you should add to your own package's dependencies. To learn more about the structure and purpose of service providers, check out their documentation. Resources Configuration Typically, you will need to publish your package's configuration file to the application's config directory. This will allow users of your package to easily override your default configuration options. 
To allow your configuration files to be published, call the publishes method from the boot method of your service provider: /** * Bootstrap any package services. */ public function boot(): void { $this->publishes([ __DIR__.'/../config/courier.php' => config_path('courier.php'), ]); } Now, when users of your package execute Laravel's vendor:publish command, your file will be copied to the specified publish location. Once your configuration has been published, its values may be accessed like any other configuration file: $value = config('courier.option'); Default Package Configuration You may also merge your own package configuration file with the application's published copy. This will allow your users to define only the options they actually want to override in the published copy of the configuration file. To merge the configuration file values, use the mergeConfigFrom method within your service provider's register method. The mergeConfigFrom method accepts the path to your package's configuration file as its first argument and the name of the application's copy of the configuration file as its second argument: /** * Register any application services. */ public function register(): void { $this->mergeConfigFrom( __DIR__.'/../config/courier.php', 'courier' ); } Routes If your package contains routes, you may load them using the loadRoutesFrom method. This method will automatically determine if the application's routes are cached and will not load your routes file if the routes have already been cached: /** * Bootstrap any package services. */ public function boot(): void { $this->loadRoutesFrom(__DIR__.'/../routes/web.php'); } Migrations If your package contains database migrations, you may use the publishesMigrations method to inform Laravel that the given directory or file contains migrations. When Laravel publishes the migrations, it will automatically update the timestamp within their filename to reflect the current date and time: /** * Bootstrap any package services. */ public function boot(): void { $this->publishesMigrations([ __DIR__.'/../database/migrations' => database_path('migrations'), ]); } Language Files If your package contains language files, you may use the loadTranslationsFrom method to inform Laravel how to load them. For example, if your package is named courier, you should add the following to your service provider's boot method: /** * Bootstrap any package services. */ public function boot(): void { $this->loadTranslationsFrom(__DIR__.'/../lang', 'courier'); } Package translation lines are referenced using the package::file.line syntax convention. So, you may load the courier package's welcome line from the messages file like so: echo trans('courier::messages.welcome'); You can register JSON translation files for your package using the loadJsonTranslationsFrom method. This method accepts the path to the directory that contains your package's JSON translation files: Publishing Language Files If you would like to publish your package's language files to the application's lang/vendor directory, you may use the service provider's publishes method. The publishes method accepts an array of package paths and their desired publish locations. For example, to publish the language files for the courier package, you may do the following: /** * Bootstrap any package services.
*/ public function boot(): void { $this->loadTranslationsFrom(__DIR__.'/../lang', 'courier'); $this->publishes([ __DIR__.'/../lang' => $this->app->langPath('vendor/courier'), ]); } Now, when users of your package execute Laravel's vendor:publish Artisan command, your package's language files will be published to the specified publish location. Views To register your package's views with Laravel, you need to tell Laravel where the views are located. You may do this using the service provider's loadViewsFrom method. The loadViewsFrom method accepts two arguments: the path to your view templates and your package's name. For example, if your package's name is courier, you would add the following to your service provider's boot method: /** * Bootstrap any package services. */ public function boot(): void { $this->loadViewsFrom(__DIR__.'/../resources/views', 'courier'); } Package views are referenced using the package::view syntax convention. So, once your view path is registered in a service provider, you may load the dashboard view from the courier package like so: Route::get('/dashboard', function () { return view('courier::dashboard'); }); Overriding Package Views When you use the loadViewsFrom method, Laravel actually registers two locations for your views: the application's resources/views/vendor directory and the directory you specify. So, using the courier package as an example, Laravel will first check if a custom version of the view has been placed in the resources/views/vendor/courier directory by the developer. Then, if the view has not been customized, Laravel will search the package view directory you specified in your call to loadViewsFrom. This makes it easy for package users to customize / override your package's views. Publishing Views If you would like to make your views available for publishing to the application's resources/views/vendor directory, you may use the service provider's publishes method. The publishes method accepts an array of package view paths and their desired publish locations: /** * Bootstrap the package services. */ public function boot(): void { $this->loadViewsFrom(__DIR__.'/../resources/views', 'courier'); $this->publishes([ __DIR__.'/../resources/views' => resource_path('views/vendor/courier'), ]); } Now, when users of your package execute Laravel's vendor:publish Artisan command, your package's views will be copied to the specified publish location. View Components If you are building a package that utilizes Blade components or placing components in non-conventional directories, you will need to manually register your component class and its HTML tag alias so that Laravel knows where to find the component. You should typically register your components in the boot method of your package's service provider: use Illuminate\Support\Facades\Blade; use VendorPackage\View\Components\AlertComponent; /** * Bootstrap your package's services. */ public function boot(): void { Blade::component('package-alert', AlertComponent::class); } Once your component has been registered, it may be rendered using its tag alias: Autoloading Package Components Alternatively, you may use the componentNamespace method to autoload component classes by convention. For example, a Nightshade package might have Calendar and ColorPicker components that reside within the Nightshade\Views\Components namespace: use Illuminate\Support\Facades\Blade; /** * Bootstrap your package's services.
*/ public function boot(): void { Blade::componentNamespace('Nightshade\\Views\\Components', 'nightshade'); } This will allow the usage of package components by their vendor namespace using the package-name:: syntax: Blade will automatically detect the class that's linked to this component by pascal-casing the component name. Subdirectories are also supported using "dot" notation. Anonymous Components If your package contains anonymous components, they must be placed within a components directory of your package's "views" directory (as specified by the loadViewsFrom method). Then, you may render them by prefixing the component name with the package's view namespace: "About" Artisan Command Laravel's built-in about Artisan command provides a synopsis of the application's environment and configuration. Packages may push additional information to this command's output via the AboutCommand class. Typically, this information may be added from your package service provider's boot method: use Illuminate\Foundation\Console\AboutCommand; /** * Bootstrap any application services. */ public function boot(): void { AboutCommand::add('My Package', fn () => ['Version' => '1.0.0']); } Commands To register your package's Artisan commands with Laravel, you may use the commands method. This method expects an array of command class names. Once the commands have been registered, you may execute them using the Artisan CLI: use Courier\Console\Commands\InstallCommand; use Courier\Console\Commands\NetworkCommand; /** * Bootstrap any package services. */ public function boot(): void { if ($this->app->runningInConsole()) { $this->commands([ InstallCommand::class, NetworkCommand::class, ]); } } Public Assets Your package may have assets such as JavaScript, CSS, and images. To publish these assets to the application's public directory, use the service provider's publishes method. In this example, we will also add a public asset group tag, which may be used to easily publish groups of related assets: /** * Bootstrap any package services. */ public function boot(): void { $this->publishes([ __DIR__.'/../public' => public_path('vendor/courier'), ], 'public'); } Now, when your package's users execute the vendor:publish command, your assets will be copied to the specified publish location. Since users will typically need to overwrite the assets every time the package is updated, you may use the --force flag: Publishing File Groups You may want to publish groups of package assets and resources separately. For instance, you might want to allow your users to publish your package's configuration files without being forced to publish your package's assets. You may do this by "tagging" them when calling the publishes method from a package's service provider. For example, let's use tags to define two publish groups for the courier package (courier-config and courier-migrations) in the boot method of the package's service provider: /** * Bootstrap any package services. */ public function boot(): void { $this->publishes([ __DIR__.'/../config/package.php' => config_path('package.php') ], 'courier-config'); $this->publishesMigrations([ __DIR__.'/../database/migrations/' => database_path('migrations') ], 'courier-migrations'); }
8585
dbpedia
2
24
https://pypi.org/project/packit/
en
packit
https://pypi.org/static/…er.abaf4b19.webp
https://pypi.org/static/…er.abaf4b19.webp
[ "https://pypi.org/static/images/logo-small.8998e9d1.svg", "https://pypi-camo.freetls.fastly.net/46b1b67c59f02e43a71853aa3169e39531d268e5/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f62663166646361616639383132303363313731363865353738643965613062353f73697a653d3530", "https://pypi-camo.freetls.fastly.net/0ee10e0fd627356c7c76193af525c091c181ba29/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f32636666623235326130373536643961313931346631366362303766343633653f73697a653d3530", "https://pypi-camo.freetls.fastly.net/46b1b67c59f02e43a71853aa3169e39531d268e5/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f62663166646361616639383132303363313731363865353738643965613062353f73697a653d3530", "https://pypi-camo.freetls.fastly.net/0ee10e0fd627356c7c76193af525c091c181ba29/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f32636666623235326130373536643961313931346631366362303766343633653f73697a653d3530", "https://pypi.org/static/images/blue-cube.572a5bfb.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi-camo.freetls.fastly.net/ed7074cadad1a06f56bc520ad9bd3e00d0704c5b/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f6177732d77686974652d6c6f676f2d7443615473387a432e706e67", "https://pypi-camo.freetls.fastly.net/8855f7c063a3bdb5b0ce8d91bfc50cf851cc5c51/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f64617461646f672d77686974652d6c6f676f2d6668644c4e666c6f2e706e67", 
"https://pypi-camo.freetls.fastly.net/df6fe8829cbff2d7f668d98571df1fd011f36192/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f666173746c792d77686974652d6c6f676f2d65684d3077735f6f2e706e67", "https://pypi-camo.freetls.fastly.net/420cc8cf360bac879e24c923b2f50ba7d1314fb0/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f676f6f676c652d77686974652d6c6f676f2d616734424e3774332e706e67", "https://pypi-camo.freetls.fastly.net/524d1ce72f7772294ca4c1fe05d21dec8fa3f8ea/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f6d6963726f736f66742d77686974652d6c6f676f2d5a443172685444462e706e67", "https://pypi-camo.freetls.fastly.net/d01053c02f3a626b73ffcb06b96367fdbbf9e230/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f70696e67646f6d2d77686974652d6c6f676f2d67355831547546362e706e67", "https://pypi-camo.freetls.fastly.net/67af7117035e2345bacb5a82e9aa8b5b3e70701d/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f73656e7472792d77686974652d6c6f676f2d4a2d6b64742d706e2e706e67", "https://pypi-camo.freetls.fastly.net/b611884ff90435a0575dbab7d9b0d3e60f136466/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f737461747573706167652d77686974652d6c6f676f2d5467476c6a4a2d502e706e67" ]
[]
[]
[ "" ]
null
[]
2021-08-25T23:31:23+00:00
Python packaging in declarative way (wrapping pbr to make it flexible)
en
/static/images/favicon.35549fe8.ico
PyPI
https://pypi.org/project/packit/
Contents: Rationale Overview Usage Facilities Including Files Other than Python Libraries Further Development Rationale Creating python packages is routine operation that involves a lot of actions that could be automated. Although there are petty good tools like pbr for that purpose, they miss some features and lack flexibility by trying to enforce some strongly opinionated decisions upon you. PacKit tries to solve this by providing a simple, convenient, and flexible way to create and build packages while aiming for following goals: simple declarative way to configure your package through setup.cfg following distutils2 setup.cfg syntax reasonable defaults open for extension Overview PacKit is wrapper around pbr though it only uses it for interaction with setuptools/distutils through simplified interface. None of pbr functions are exposed but instead PacKit provides its own interface. Available facilities Here's a brief overview of currently implemented facilities and the list will be extended as new ones will be added. auto-version - set package version depending on selected versioning strategy. auto-description - set package long description auto-license - include license file into distribution auto-dependencies - populate install_requires and test_requires from requirement files auto-packages - discover packages to include in distribution. auto-extra-meta - add useful options to the metadata config section auto-package-data - include all files tracked by git from package dirs only. auto-tests - make python setup.py test run tests with tox or pytest (depending on tox.ini presence). On top of that PacKit forces easy_install to honor following PIP's fetch directives: index_url find_links Planned facilities auto-plate - integration with platter auto-license - fill out license information auto-pep8 - produce style-check reports auto-docs - API docs generation auto-clean - configurable clean jobs auto-coverage (?) - produce coverage reports while running tests If you don't see desired facilities or have cool features in mind feel free to contact us and tell about your ideas. Usage Create a setup.py in your project dir: : from setuptools import setup setup(setup_requires='packit', packit=True) That was the first and the last time you touched that file for your project. Now let's create a setup.cfg that you will use in order to configure your package: [metadata] name = cool-package And... if you're not doing anything tricky in your package then that's enough! And if you do, take a look at the section below. Facilities Currently all available facilities are enabled by default. Though you can easily turn them off by using facilities section in your setup.cfg: [facilities] auto-version = 0 auto-dependencies = f auto-packages = false auto-package-data = n auto-tests = no If facility is explicitly disabled it won't be used even if facility-specific configuration section is present. Facility-specific defaults and configuration options described below. auto-version When enabled, auto-version will generate and set package version according to selected versioning strategy. Versioning strategy can be selected using type field under auto-version section within setup.cfg. The default is: [auto-version] type = git-pep440 output = src/templates/version.html You can use output field to ask PacKit to write generated version value into specified filename. The specified filename do not need to exist but the parent directories should exist. Provided path should always use forward slashes. 
git-pep440 Generate PEP440-compliant version from annotated git tags. It's expected that you are using git tags that follow public version identifier description and git-pep440 will just append number of commits since tag was applied to your tag value (the N in public version identifier description). If number of commits since tag equal to 0 (your building the tagged version) the N value won't be appended. Otherwise, it will be appended and local version identifier equal to first 7 chars of commit hash will be also added. Please note: you must create an annotated tag, otherwise it will be ignored. Example: 1. <git tag -a 1.2.3.dev -m "dev release 1.2.3.dev"> -> version is 1.2.3.dev <git commit> -> version is 1.2.3.dev.post1 <git commit> -> version is 1.2.3.dev.post2 <git tag -a 1.2.3.a -m "Release 1.2.3.a"> -> version is 1.2.3.a <git commit> -> version is 1.2.3.a.post1 <git tag -a 1.2.3 -m "Release 1.2.3"> -> version is 1.2.3 <git commit> -> version is 1.2.3.post1 <git commit> -> version is 1.2.3.post2 fixed Use value specified in value (it's required when this strategy is used) under auto-version section in setup.cfg: [auto-version] type = fixed value = 3.3 file Read a line using UTF-8 encoding from the file specified in value (it's required when this strategy is used) under auto-version section in setup.cfg, strip it and use as a version. [auto-version] type = file value = VERSION.txt shell Execute command specified in value (it's required when this strategy is used) under auto-version section in setup.cfg, read a line from stdout, strip it and use as a version composite The most advanced version strategy designed for special cases. It allows you to generate complex version values based on other version strategies. The usage is pretty simple though: [auto-version] type = composite value = {foo}.{bar}+{git} output = main.version [auto-version:foo] type = fixed value = 42 output = 1st.version [auto-version:bar] type = shell value = echo $RANDOM [auto-version:git] type = git-pep440 output = 3rd.version The value field in composite version strategy should be a valid string format expression. Please note that output directives used here only for reference (to show that they can be used anywhere) and are not required. It's OK to define 'extra' version components and not use them but it's an error to not define any of components mentioned in composite version template. auto-description When enabled will fill out long_description for package from a readme. The readme file name could be specified with file field under auto-description section. If no file name provided, it will be discovered automatically by trying following list of files: README readme CHANGELOG changelog Each of these files will be tried with following extensions: <without extension> .md .markdown .mkdn .text .rst .txt The readme file will be included in the package data. auto-license When enabled will include the license file into the distribution. The license file name could be specified by the file field within auto-license section. If license file name is not provided the facility will try to discover it in the current dir trying following file names: LICENSE license Each of these files will be tried with following extensions: <without extension> .md .markdown .mkdn .text .rst .txt auto-dependencies When enabled will fill install_requires and test_requires from requirement files. Requirement files could be specified by install and test fields under the auto-dependencies section of the setup.cfg. 
If requirements file names not provided then the facility will try to discover them automatically. For installation requirements following paths will be tried: requires requirements requirements/prod requirements/release requirements/install requirements/main requirements/base For testing requirements following paths will be tried: test-requires test_requires test-requirements test_requirements requirements_test requirements-test requirements/test For each path following extensions will be tried <without extension> .pip .txt Once a file is found, PacKit stops looking for more files. You can use vcs project urls and/or archive urls/paths as described in pip usage - they will be split in dependency links and package names during package creation and will be properly handled by pip/easyinstall during installation. Remember that you can also make "includes" relationships between requirements.txt files by including a line like -r other-requires-file.txt. auto-packages When enabled and no packages provided in setup.cfg through packages option under files section will try to automatically find out all packages in current dir recursively. It operates using exclude and include values that can be specified under auto-packages section within setup.cfg. If exclude not provided the following defaults will be used: test, docs, .tox and env. If include not provided, auto-packages will try the following steps in order to generate it: If packages_root value provided under files section in setup.cfg, it will be used. Otherwise the current working dir will be scanned for any python packages (dirs with __init__.py) while honoring exclude value. This packages also will be included into the resulting list of packages. Once include value is determined, the resulting packages list will be generated using following algorithm: for path in include: found_packages |= set(find_packages(path, exclude)) auto-extra-meta When enabled, adds a number of additional options to 'metadata' section. Right now, only 1 extra option supported: is_pure - allows you to override 'purity' flag for distribution, i.e. you can explicitly say whether your distribution is platform-specific or no. auto-tests Has no additional configuration options [yet]. When enabled, the python setup.py test is equal to running: tox if tox.ini is present pytest with pytest-gitignore and teamcity-messages plugins enabled by default otherwise (if you need any other plugins just add them to test requirements) and activate them with additional options (see below) The facility automatically downloads underlying test framework and install it - you don't need to worry about it. You can pass additional parameters to the underlying test framework with '-a' or '--additional-test-args='. auto-package-data See the next section. Including Files Other than Python Libraries Often, you need to include a data file, or another program, or some other kind of file, with your Python package. Here are a number of common situations, and how to accomplish them using packit: Placing data files with the code that uses them: auto-package-data The default is that the auto-package-data facility is enabled. In this configuration, you can include data files for your python library very easily by just: Placing them inside a Python package directory (so next to an __init__.py or in a subdirectory), and Adding them to git version control. setup.cfg src/ src/nicelib/ src/nicelib/__init__.py src/nicelib/things.py src/nicelib/somedata.csv No change in setup.cfg is required. 
Putting the files here will cause the packaging system to notice them and install them in the same arrangement next to your Python files, but inside the virtualenv where your package is installed. Once this is done, you have several easy options for accessing them, and all of these should work the same way in development and once installed: The least magical way is pathlib.Path(__file__).parent / 'somedata.csv', or some equivalent with os.path calls. This makes your package non-zip-safe, so it can't be used in a pex or zipapp application. The new hotness is importlib.resources.open_text('nicelib', 'somedata.csv') and related functions, available in the stdlib in Python 3.7+ or as a backport in the importlib_resources PyPI package. One limitation is this does not support putting resources deeper in subdirectories. The previous standard has been pkg_resources.resource_stream('nicelib', 'somedata.csv') and related functions. This supports deeper subdirectories, but is much slower than importlib.resources. You shouldn't need to install pkg_resources, it's part of setuptools, which is always available these days. You can turn off the auto-package-data facility if you don't want this file inclusion mechanism to happen: [facilities] auto-package-data = no auto-package-data will not work if your Python package is not at the root of your git repository (setup.py is not next to .git). Placing data files relative to the virtual environment You can also place files relative to the virtualenv, rather than inside the package hierarchy (which would be in virtualenv/lib/python*/site-packages/something). This is often used for things like static files in a Django project, so that they are easy to find for an external web server. The syntax for this is: [files] data_files = dest_dir = src_dir/** dest_dir = file_to_put_there In this example, dest_dir will be created within the top level of the virtualenv. The contents of src_dir will be placed inside it, along with file_to_put_there. If you need to include a compiled executable file in your package, this is a convenient way to do it - include bin = bin/** for example. See the fastatools package for an example of this. There is also a confluence page with more details on including compiled programs. Including Python scripts Scripts need to be treated specially, and not just dropped into bin using data_files, because Python changes the shebang (#!) line to match the virtualenv's python interpreter. This means you can directly run a script without activating a virtualenv - e.g. env/bin/pip install attrs will work even if env isn't activated.[1] If you have some scripts already, the easiest thing is to collect them in one directory, then use scripts: [files] scripts = bin/* Alternatively, setuptools has a special way to directly invoke a Python function from the command line, called the console_scripts entry point. pull-sp-sub is an internal package that uses this: [entry_points] console_scripts = pull-sp-sub = pull_sp_sub:main To explain that last line, it's name-of-the-script = dotted-path-of-the-python-module:name-of-the-python-function. So with this configuration, once the package is installed, setuptools creates a script at $VIRTUAL_ENV/bin/pull-sp-sub which activates the virtualenv and then calls the main function in the pull_sp_sub module. Scripts created this way are slightly slower to start up than scripts that directly run a Python file. 
Also, setuptools seems to do more dependency checking when starting a script like this, so if you regularly live with broken dependencies inside your virtualenv, this will be frustrating for you. On the other hand, scripts made this way will work better on Windows, if that's one of your target environments. Including compiled shared libraries in both source and binary packages This works because the NCBI Python/Linux environment is so homogeneous, but it does cause problems - these compiled items are linux- and architecture-specific, but this doesn't tell Python's packaging system about that. So for example if you run pip install applog on a Mac, it will claim to succeed, but the library won't work. See the next section for how to do this in a more robust way. This includes things that use the C++ Toolkit (see python-applog and cpp-toolkit-validators for examples). These .so files should get placed inside the python package hierarchy. Presumably, if you're compiling them, they are build artifacts that won't be tracked by git, so they won't be included automatically by auto-package-data. Instead, once they are there, use extra_files to have the packaging system notice them: [files] extra_files = ncbilog/libclog.so ncbilog/libclog.version If your packages live inside a src directory, you do need to include that in the extra_files path: [files] extra_files = src/mypkg/do_something_quickly.so Notice that extra_files is different from data_files which we used above. Including uncompiled C extensions (including Cython) Packit can coexist with setuptools's support for C extensions. Here is an example with a C file that will be compiled on the user's system. In that particular package, the author chose to require Cython for developers but not for end users, so the distribution and the git repo include both the .pyx file and the .c file it's translated to. Known Issues If your Python package is not in the root of your Git repository (so setup.py is not in the same directory as .git), then auto-package-data will not work. The auto-package-data section has configuration options, but they don't do anything right now (PY-504). Further Development Add tests Improve docs More configuration options for existing facilities New facilities Allow extension through entry points
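To tie the earlier data-file discussion together, here is a minimal sketch of the three access patterns that were named (pathlib, importlib.resources, and pkg_resources). It assumes the nicelib package with somedata.csv from the layout shown above is installed; only names already mentioned in the text are used, so treat the exact paths as illustrative.

from pathlib import Path
import importlib.resources
import pkg_resources

# 1. Path relative to the module file (not zip-safe). This line is meant to live
#    inside the package itself, for example in nicelib/things.py:
csv_path = Path(__file__).parent / "somedata.csv"

# 2. importlib.resources (stdlib in Python 3.7+, or the importlib_resources backport);
#    note it does not reach into deeper subdirectories:
with importlib.resources.open_text("nicelib", "somedata.csv") as fh:
    data = fh.read()

# 3. pkg_resources (ships with setuptools; slower, but supports subdirectories):
data = pkg_resources.resource_stream("nicelib", "somedata.csv").read().decode("utf-8")

All three behave the same in development and after installation, which is the point of letting auto-package-data place the file next to the code.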
8585
dbpedia
1
26
https://www.jamf.com/resources/videos/zero-touch-packaging-and-patch-management-with-patchbot/
en
Zero-touch packaging and patch management with PatchBot
https://media.jamf.com/i…h=400&q=80&w=700
https://media.jamf.com/i…h=400&q=80&w=700
[ "https://resources.jamf.com/images/logos/jamf-one-color-dark-for-print-css.svg" ]
[]
[]
[ "" ]
null
[]
null
This is a presentation for those who are interested in using the Jamf Pro API to automate workflows.
en
/apple-touch-icon-57x57.png
https://www.jamf.com/resources/videos/zero-touch-packaging-and-patch-management-with-patchbot/
This is a presentation for those who are interested in using the Jamf Pro API to automate workflows. This is a presentation for those who are interested in using the Jamf Pro API to automate workflows. If you have a well set-up MDM then the biggest challenge you can be left with is keeping all the packages up to date in your MDM and your fleet properly patched. Every new application you install at enrollment or offer in Self Service just makes the task harder. In a high-security environment such as a finance company like the Suncorp Group, it is not only challenging but vital. At Suncorp I have leveraged AutoPkg, the Jamf patch management system, and Jamf API to build a total solution where almost all of my applications are automatically patched on my fleet without me touching a thing. Patch levels across the fleet have gone from woeful to good in a period of six months. I will explain the basics of AutoPkg and writing custom processors in Python for AutoPkg. I will show how I used further Python to send reports to Microsoft Teams. Finally I will show how patches are moved from test to production using Python and AutoPkg. I will also elaborate some of the lessons learnt engineering the whole system and what I plan for the future. (All the code is available on Github.)
8585
dbpedia
3
48
https://xpressrazor.wordpress.com/2020/11/04/java-programming-in-emacs/
en
Java Programming in Emacs
https://xpressrazor.word…/11/image-14.png
https://xpressrazor.word…/11/image-14.png
[ "https://xpressrazor.wordpress.com/wp-content/uploads/2015/03/cropped-banner1.jpg", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/image-14.png?w=800", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/screenshot_20201103_171136.png?w=810", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/image-10.png?w=824", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/image.png?w=952", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/image-3.png?w=1024", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/image-4.png?w=1024", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/image-5.png?w=820", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/image-11.png?w=645", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/image-13.png?w=1024", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/image-6.png?w=1024", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/image-7.png?w=823", "https://xpressrazor.wordpress.com/wp-content/uploads/2020/11/image-9.png?w=1024", "https://lh3.googleusercontent.com/a-/AOh14Gh4rX-H6shDn64RhuuSBL0A1xNK7F4NZzHs_g6lcQ=s96-c", "https://2.gravatar.com/avatar/82656d2c698b73d5c18fcc41354571c38724318a35399d2b0d2a81226d2aed96?s=48&d=identicon&r=G", "https://i0.wp.com/pbs.twimg.com/profile_images/1360483661091532800/YL6GyuJg_normal.jpg?resize=48%2C48&ssl=1", "https://lh3.googleusercontent.com/a/AATXAJxTL4uFvq8YiUDRiCjITR_eeRNhw59GVq4j64QK=s96-c", "https://2.gravatar.com/avatar/5563773af8b2438016f5710728b6960d0f9befd5f92dc3c50df2a2e3f902223e?s=48&d=identicon&r=G", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[]
2020-11-04T00:00:00
Introduction In this tutorial, I will go through setting up Emacs for Java development. The installation part will be fairly simple, as we will use my java specific Emacs settings. For this particular setup, I want to focus on Java Programming specific packages. Therefore, this setup does not contain other popular Emacs packages. Once you…
en
https://s1.wp.com/i/favicon.ico
xpressrazor
https://xpressrazor.wordpress.com/2020/11/04/java-programming-in-emacs/
Introduction In this tutorial, I will go through setting up Emacs for Java development. The installation part will be fairly simple, as we will use my java specific Emacs settings. For this particular setup, I want to focus on Java Programming specific packages. Therefore, this setup does not contain other popular Emacs packages. Once you are comfortable with how it works, you should be able to add your favorite package to this setting or copy settings and ideas from here to your setup without any problem. We will use Language Server Protocol (LSP) related packages and settings for this setup. You can read more about LSP here. At the end of this setup you should have an editor that has a debugger UI, code auto-completion, project initializer and several other features to help you edit and manage your code. E.g. Installation You should have emacs installed in your computer. If not, go ahead and install it first. Also, make sure you have installed java, maven and git. Backup your current emacs folders and files. If later, you don’t like this setup you can always revert back. After backing up, delete any .emacs or .emacs.d files/folders in your home directory. In a terminal window execute following commands. These commands are for unix environments. If you are on windows adjust the commands to match the directories accordingly. $ git clone https://github.com/neppramod/java_emacs.git ~/.emacs.d Note: If you want to test above configuration outside your primary setup, you can download it to a separate directory and load it using emacs -q -l init.el. However you will need to adjust EMACS_DIR variable value inside init.el file. Adjust it to where you have downloaded above setting. I prefer previous method, where you copy it to your primary emacs configuration directory. I have tested this setting in java 10 and java 14. If you have older versions of java, you may need to download a newer version of java. Once downloaded you can specify them in operating system specific files, linux.el, mac.el or windows.el. Since I also have an older version of java in my Mac, I had to specify values to two variables (JAVA_HOME and lsp-java-java-path) to point to the newer java version. If emacs complains about java version during installation of LSP please add following settings to operating system specific files, and adjust the folders accordingly. It may not be a bad idea to set them up anyway. (setenv "JAVA_HOME" "path_to_java_folder/Contents/Home/") (setq lsp-java-java-path "path_to_java_folder/Contents/Home/bin/java" After above setup, you can go ahead and start emacs. It should download all the required packages and setup emacs accordingly. If at any point, it complains about missing packages, you may have to refresh to latest package list by executing package-list-packages in emacs and restart emacs. Once the installation is finished restart emacs once anyway. Now, you should have all the packages required for java development. Emacs should look like in the screenshot below. You can switch between white and dark themes using F6 key. You can go through emacs-configuration.org and init.el to see how each packages are setup. Next, we will tackle how to use LSP to work with java projects. Working with Java Project. You can work on a simple java project with few files that has main method in it. However, here let’s setup a little bit more than that. LSP comes with lsp-java-spring-initializer command, an emacs interface for start.spring.io (spring initializer). Go ahead and execute that command. 
Use the following setup instructions. group name: com.example artifact id: demo Description: Demo project for Spring Boot Select boot-version: Select latest snapshot Java Version: I selected 11, but you can choose others based on your setup Select Language: Java Select Packaging: Jar Select Package Name: com.example.demo Select type: Maven Project Select project directory: select a project directory Select dependencies: Developer Tools / Spring Boot Dev Tools (Provides fast application restarts …). I selected the first one from the list. At the end of the setup, emacs should ask "Do you want to import the project?" Select yes. After importing the project, it should take you to the root of the project (where you will see pom.xml). If you type C-c p f you can quickly find the files within the project and select them by typing a few characters. Once you open a java file within the project, lsp-java should prompt you to install the server. Go ahead and install the available jdtls server. After that it should ask you to import the project root; import it to LSP. You can stop, start, restart, or disconnect an LSP server by using C-c l and the options shown below. If you see error messages while connecting to the server, you may want to shut down the existing server and start the server again. Let’s create a simple java class. Create a class called Person in the com.example.demo package inside the src/main/java/com/example/demo directory with the following code. package com.example.demo; public class Person { private String name; private String title; public String getTitle() { return title; } public void setTitle(String title) { this.title = title; } public String getName() { return name; } public void setName(String name) { this.name = name; } } You can also use various functionalities of LSP to create the above code. E.g. let’s say you did not have a getter/setter for name. You could use C-c l a a to bring up the option to create a getter/setter for the field. You could also click the option presented in the top right corner of the editor, just like in the diagram below. If you select the general Getters and Setters option, you should be able to select more than one variable using the Space key, and generate getters and setters for all of them. Helm helps you in selecting more than one entry from the list. Also create a unit test class called PersonTest.java inside demo/src/test/java/com/example/demo/PersonTest.java with the following code. package com.example.demo; import static org.junit.jupiter.api.Assertions.assertEquals; import org.junit.jupiter.api.Test; public class PersonTest { @Test public void testMethods() { Person p = new Person(); p.setName("Monkey D. Luffy"); p.setTitle("Pirate King"); assertEquals(p.getName(), "Monkey D. Luffy"); assertEquals(p.getTitle(), "Pirate King"); } } Projectile recognizes a test file by the Test suffix on the class name. If you have Person and PersonTest, you can quickly jump between the two using C-c p t. If you are in the Person class and use M-? (Alt + ?), you can see Person’s references in the project. You can go to them easily using C-n and C-p. If you are highlighting the Person type in the PersonTest class, you can quickly go to the definition class (Person) using M-. You can jump to the definition of methods, variables, etc. using this key. To go back to the previous jump point, use M-, (Alt + ,). If you use C-c l g a you can search any symbols in the workspace (including standard java symbols) and quickly jump to the source code of that type. To run the unit tests, use the C-c p P command (projectile-test-project) and type “mvn test”.
It should build the project and run the unit tests. You should see something like the following diagram. Note: I have changed the theme to white. If you want to run the project you can use the C-c p u command (projectile-run-project) and type “mvn spring-boot:run”. Since there is nothing to run at this moment, it should start the DemoApplication and quit with Success. For the next part, let’s create a simple web response using the example in Building an Application with Spring Boot. Since we selected the first option while creating our spring project, we may not have spring-web added to our project. Add the following dependency to pom.xml <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> Also, for the test dependency, you can add the exclusions as mentioned in the above guide. The test dependency should look like the following. <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> <exclusions> <exclusion> <groupId>org.junit.vintage</groupId> <artifactId>junit-vintage-engine</artifactId> </exclusion> </exclusions> </dependency> Now, let’s add a simple controller called HelloController.java to the project. Create a file called HelloController.java inside the same folder where you have DemoApplication.java. We should have separated each part into separate packages, but for this tutorial we want to keep it simple. To add the package you can start by typing “package com”, and emacs should complete the rest of it. To add the class name, we can use yasnippets. Type “file” and press TAB. It should complete the class name for you. Let’s add an annotation called @RestController to the controller class. While you are typing the above code, emacs should suggest the package name, and when you press enter, it should add the package name to the file. Note that if you hover over a type or method, it shows javadoc documentation. Use the same technique for adding a string method that returns a message. Your class should look like the following code. package com.example.demo; import org.springframework.web.bind.annotation.RestController; import org.springframework.web.bind.annotation.RequestMapping; @RestController public class HelloController { @RequestMapping("/") public String index() { return "Greetings from Spring Boot!"; } } If you now run the project with mvn spring-boot:run using the C-c p u command launcher, you should notice the application no longer quits. If you visit http://localhost:8080/ you should see the greeting message we wrote in HelloController. In my case, I went through the rest of the tutorial, and after adding actuator, I was able to see the health of the website, as in the following diagram. To quit the running server, press C-c C-k in the *compilation* window (the window where you see the spring messages). Debugging There are various ways to debug your application. If you want to start from main, you can invoke the dap-java-debug command to start debugging. In this example, though, this command will run the HelloController. Let’s instead test the Person class. Open the PersonTest class and invoke dap-java-debug-test-class. You can also test a single method by using dap-java-debug-test-method. You should see a “1 test successful” message. This is not very intuitive. Let’s add a breakpoint at line 11 of the PersonTest class. This is the line just after the one where the Person class is initialized. Now if you invoke dap-java-debug-test-method while your cursor is within the testMethods() method, you should see the debug UI.
There should be a hydra window at the bottom showing you keys for different options. You can open up local variable list with sl key, similarly sb to see breakpoints, and you can jump through the code using n, i, o and c keys. If the hydra window disappears for some reason, you can always bring it back using M-F5 key (Alt + F5). Couple of useful commands I tend to use quite often are from Eval section. E.g. I can add an expression to watch it while I am debugging. I can use ea and add an expression from objects in the test. E.g. In below example, I added expressions like “p.getName()“, “p.getTitle()” and “person.getName()“. As you know we don’t have a person object, so that should show an error. Other two values should evaluate to null until we execute the setter lines for each of the variables. Also if you look closely in the diagram below, I selected p.getTitle() in code and used er to evaluate the selected region. Go through other options to see what they do. Conclusion This tutorial should get you to a comfortable position where you can explore more options provided by LSP, Projectile, DAP and Helm to work through your project. To make your coding easier I suggest adding various snippets to complete your code. Also remember to manage LSP servers. If you start too many servers your computer memory might go up quickly. Take a look at memory usage, and restart LSP server or emacs if necessary. To make lsp start over again, you can delete the .lsp-session* files and workspace folder (last option) inside your .emacs.d folder. Also use projectile-remove-known-project or similar projectile commands to remove projects that are no longer needed. You can always add them later. If you feel your computer is sluggish with Emacs and LSP tweak gc-cons-threshold and gc-cons-percentage values in init.el. You can also tweak read-process-output-max variable in lsp-mode setting. Java uses around 1GB memory. You can tweak lsp-java-vmargs as listed in lsp-java github configuration file as well. I hope this tutorial helped you to start using Emacs for Java development. References
8585
dbpedia
1
10
https://ask.replit.com/t/disable-automatic-pip-install/70633
en
Disable automatic pip install
https://global.discourse…31d9360acec9.png
https://global.discourse…31d9360acec9.png
[ "https://sea2.discourse-cdn.com/business7/user_avatar/ask.replit.com/firepup650/48/4680_2.png", "https://sea2.discourse-cdn.com/business7/user_avatar/ask.replit.com/firepup650/48/4680_2.png", "https://sea2.discourse-cdn.com/business7/user_avatar/ask.replit.com/firepup650/48/4680_2.png", "https://emoji.discourse-cdn.com/twitter/frowning.png?v=12", "https://avatars.discourse-cdn.com/v4/letter/n/e9c0ed/48.png", "https://sea2.discourse-cdn.com/business7/user_avatar/ask.replit.com/shaneatreplit/48/458_2.png", "https://sea2.discourse-cdn.com/business7/user_avatar/ask.replit.com/shaneatreplit/48/458_2.png" ]
[]
[]
[ "" ]
null
[]
2023-10-04T05:40:16+00:00
Hello, I’m trying to use a python package called interactions-py to make a Discord bot and when I start my Repl, it automatically installs another package for Discord with poetry. Is it possible to disable the automati&hellip;
en
https://global.discourse…7693_2_32x32.png
Replit Ask
https://ask.replit.com/t/disable-automatic-pip-install/70633
Hello, I’m trying to use a python package called interactions-py to make a Discord bot and when I start my Repl, it automatically installs another package for Discord with poetry. Is it possible to disable the automatic pip install with poetry on a Python Repl ? Thanks There’s supposed to be a toggle for it in .replit, but it doesn’t work. Here’s the workaround: Friendly reminder that this does break the Packages tool. Even though it breaks the packager, you can still use poetry directly from the Shell, which is what the packager calls under the hood. (It calls upm, which calls poetry)
8585
dbpedia
2
12
https://python-poetry.org/
en
Python dependency management and packaging made easy
https://python-poetry.or…n-origami-32.png
https://python-poetry.or…n-origami-32.png
[ "https://python-poetry.org/images/logo-origami.svg", "https://python-poetry.org/images/logo-origami.svg", "https://python-poetry.org/images/logo-origami.svg" ]
[]
[]
[ "" ]
null
[]
null
Python dependency management and packaging made easy
/images/favicon-origami-32.png
https://python-poetry.org/
Libraries This chapter will tell you how to make your library installable through Poetry. Versioning Poetry requires PEP 440-compliant versions for all projects. While Poetry does not enforce any release convention, it used to encourage the use of semantic versioning within the scope of PEP 440 and supports version constraints that are especially suitable for semver. Note As an example, 1.0.0-hotfix.1 is not compatible with PEP 440. Configuration Poetry can be configured via the config command (see more about its usage here) or directly in the config.toml file that will be automatically created when you first run that command. Repositories Poetry supports the use of PyPI and private repositories for discovery of packages as well as for publishing your projects. By default, Poetry is configured to use the PyPI repository, for package installation and publishing. So, when you add dependencies to your project, Poetry will assume they are available on PyPI. This represents most cases and will likely be enough for most users. Private Repository Example Installing from private package sources By default, Poetry discovers and installs packages from PyPI.. Dependency specification Dependencies for a project can be specified in various forms, which depend on the type of the dependency and on the optional constraints that might be needed for it to be installed. Version constraints Caret requirements Caret requirements allow SemVer compatible updates to a specified version. Plugins Poetry supports using and building plugins if you wish to alter or expand Poetry’s functionality with your own. For example if your environment poses special requirements on the behaviour of Poetry which do not apply to the majority of its users or if you wish to accomplish something with Poetry in a way that is not desired by most users. In these cases you could consider creating a plugin to handle your specific logic.. Contributing to Poetry First off, thanks for taking the time to contribute! The following is a set of guidelines for contributing to Poetry on GitHub. FAQ Why is the dependency resolution process slow? While the dependency resolver at the heart of Poetry is highly optimized and should be fast enough for most cases, with certain sets of dependencies it can take time to find a valid solution. This is due to the fact that not all libraries on PyPI have properly declared their metadata and, as such, they are not available via the PyPI JSON API..
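To make the versioning rules above a bit more concrete, here is a small sketch using the third-party packaging library rather than Poetry itself; the caret expansion shown follows the commonly documented reading of ^1.2.3 as >=1.2.3,<2.0.0.

from packaging.specifiers import SpecifierSet
from packaging.version import InvalidVersion, Version

# PEP 440 compliance check; 1.0.0-hotfix.1 is the non-compliant example from the text.
for candidate in ["1.0.0", "1.0.0.post1", "1.0.0-hotfix.1"]:
    try:
        Version(candidate)
        print(candidate, "is a valid PEP 440 version")
    except InvalidVersion:
        print(candidate, "is NOT PEP 440 compliant")

# A caret requirement such as ^1.2.3 allows SemVer-compatible updates below 2.0.0.
caret = SpecifierSet(">=1.2.3,<2.0.0")
print("1.9.0" in caret)  # True
print("2.0.0" in caret)  # False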
8585
dbpedia
2
86
https://docs.veracode.com/updates/r/Veracode_CLI_Updates
en
Veracode Docs
https://docs.veracode.co…code-favicon.png
https://docs.veracode.co…code-favicon.png
[ "https://docs.veracode.com/img/Veracode_Docs_Logo_Light_Mode.svg", "https://docs.veracode.com/img/Veracode_Docs_Logo_Dark_Mode.svg" ]
[]
[]
[ "" ]
null
[]
2024-08-01T00:00:00+00:00
The updates on this page apply to the Veracode CLI. Updates that apply to specific Veracode regions show a region icon.
en
/img/veracode-favicon.png
https://docs.veracode.com/updates/r/Veracode_CLI_Updates
The updates on this page apply to the Veracode CLI. Updates that apply to specific Veracode regions show a region icon. For updates specific to Veracode Fix, such as language and CWE support, see Fix updates. This update includes the following improvements to the veracode package, veracode sbom, and veracode scan commands: Streamlined the log output by removing overly detailed logs that were cluttering the console. This change improves readability and helps users focus on the most relevant information during the execution of commands. Detailed logs are still available using the --debug flag. Windows C/C++ packaging improvements, including parallel project builds, skipping non-C/C++ projects, and enhanced logging for failed builds. Improved error messaging. Updated the Syft JSON schema from version 13.0.0 to 16.0.4. Updated the CycloneDX XML schema from version 1.5 to 1.6. URL encoder is applied. This update includes the following improvements: The veracode static scan command has improved error messages that indicate whether API credentials are missing or invalid. The veracode configure command now correctly uses directory paths that contain environment variables. General performance improvements. This update includes the following improvements to the veracode package command: Support for C/C++ Windows, C/C++ Linux, COBOL, and Perl. Support for multiple target frameworks defined in the *.csproj, Directory.Packages.props, and Directory.Build.props files in .NET projects. Improved language support for .NET. This update includes the following improvements to the veracode package command: Support for .NET Framework 4.6-4.8. Support for restoring .NET projects with <TargetFramework>. Improved language support for Flutter, Android, PHP, and .NET. More meaningful --debug messages if the command is not able to locate required build or packaging tools. This update includes the following improvements: The veracode package and veracode scan commands are now supported on Alpine-based environments. Improved Java, JavaScript, and Python language support for the veracode package command. The veracode static command now provides improved command output. The veracode static command output now only displays scannable modules once. This update adds the veracode dynamic command. You use this command to run a DAST Essentials dynamic analysis, check the status of the analysis, and output the results. The veracode package command now supports Android, React Native, Dart, and Flutter. This update includes the following improvements to the veracode package command: Support for streaming log messages. Improved language support for .NET and Python. This update includes the following improvements: The veracode package command now supports iOS, in addition to existing support for Java, JavaScript, .NET, Python, PHP, Scala, Kotlin, Go, and Ruby on Rails. You use this command to auto-package your applications for Static Analysis and Software Composition Analysis (SCA) upload scans. The packaged artifacts use a Veracode approved filename format. Improved error messaging. The veracode package command output now displays the installed CLI version. The veracode fix command now provides suggested fixes for a directory of source files, in addition to a single source file. You can fix flaws in multiple files as a batch, without having to rescan your code each time you apply a fix. 
The veracode package command now supports application packaging for the following languages: .NET Go Kotlin PHP Ruby on Rails Scala The Veracode CLI now includes the following commands: veracode repository add: create an Excel file that lists all accessible repositories from which to import contributing developers. veracode repository report: create a report that lists all contributing developers for each repository. The Veracode CLI now supports Static Analysis auto-packaging for Java, JavaScript, and Python. The package command removes manual packaging steps to streamline your application security tests. You can now install the Veracode CLI on Windows with a PowerShell script. You can now install the Veracode CLI on Windows with Chocolatey. Veracode Fix is now fully supported in the European Region. The veracode fix command is a new generative AI feature of the Veracode CLI. It uses the results from a Veracode Pipeline Scan to generate suggested code fixes that you can apply to flaws in your application source code. This feature is currently only available in the Commercial Region. To get started, see the quickstart. Veracode Container Security is available. Container Security is a feature of the Veracode CLI that does the following: Scans for container vulnerabilities Scans for infrastructure as code misconfigurations Scans for improperly stored secrets Helps developers secure their cloud native applications For more information about Veracode Container Security, contact your Veracode account representative.
8585
dbpedia
3
25
https://macintoshguy.wordpress.com/tag/autopkg/
en
The Macintosh Guy
https://s0.wp.com/i/blank.jpg
https://s0.wp.com/i/blank.jpg
[ "https://macintoshguy.wordpress.com/wp-content/uploads/2020/06/1599px-inside_the_factory.jpg?w=1024", "https://i0.wp.com/www.linkedin.com/img/webpromo/btn_viewmy_160x33.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[]
2020-07-10T09:46:18+10:00
Posts about autopkg written by Honestpuck
en
https://s1.wp.com/i/favicon.ico
The Macintosh Guy
https://macintoshguy.wordpress.com/tag/autopkg/
Here is my fourth post about PatchBot. In the first post I gave a short summary of how the system works and introduced JPCImporter, the first AutoPkg custom processor. In the second post I introduced patch management and the second custom processor. In the third post I showed the third custom processor and the code to run it at the right time. In the first three blog posts I explained (in great detail) how my system, PatchBot, works. Today I am going to cover how to take the pieces and put them together into a complete system. Continue reading → After my last post Graham Pugh mentioned that the AutoPkg repository list is stored in the AutoPkg preference file as RECIPE_REPOS with the search order in RECIPE_SEARCH_DIRS. He suggested doing a while loop on the defaults read output but I thought it was just fiddly enough a task in the shell that I might resort to a few lines of Python, so here it is, a Python script to dump out your repository list in search order. Tiny but it does the job. (Thanks to Graham for taking the time to comment on the previous post, it was just what I needed to get me to spend the few minutes doing this.)

#!/usr/bin/env python3
# repos.py
# print the list of AutoPkg repos in search order
# NOTE: Totally lacking in any error checking or handling
import plistlib
from os import path

plist = path.expanduser('~/Library/Preferences/com.github.autopkg.plist')
fp = open(plist, 'rb')
prefs = plistlib.load(fp)
search = prefs['RECIPE_SEARCH_DIRS']
repos = prefs['RECIPE_REPOS']
# start at 3 to skip the built in ones
for i in range(3, len(search)):
    print(repos[search[i]]['URL'])

Here’s a little one for you. I needed to keep the recipe repositories in sync across two machines. AutoPkg will happily give you a repo-list; here’s part of mine:

autopkg repo-list
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.48kRAM-recipes (https://github.com/autopkg/48kRAM-recipes)
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.HobbitHardcase-recipes (https://github.com/autopkg/HobbitHardcase-recipes)
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.MichalMMac-recipes (https://github.com/autopkg/MichalMMac-recipes)
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.adobe-ccp-recipes (https://github.com/autopkg/adobe-ccp-recipes)
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.arubdesu-recipes (https://github.com/autopkg/arubdesu-recipes)
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.aysiu-recipes (https://github.com/autopkg/aysiu-recipes)

Unfortunately that isn’t in a form you can feed to AutoPkg’s repo-add command. We need something like sed to make it right. Here we go.

autopkg repo-list | sed "s#[^(]*(\([^)]*\)).*#\1#"
https://github.com/autopkg/48kRAM-recipes
https://github.com/autopkg/HobbitHardcase-recipes
https://github.com/autopkg/MichalMMac-recipes
https://github.com/autopkg/adobe-ccp-recipes
https://github.com/autopkg/arubdesu-recipes
https://github.com/autopkg/aysiu-recipes

Now we add them on the other computer. Pipe the above into repos.txt and then:

while read -r line ; do
    autopkg repo-add $line
done < repos.txt

Now if AutoPkg had an option to list the repositories in search order rather than alphabetical… Over the weekend I was feeling a little bored so I decided to try my hand at writing a shell script to add custom completion for autopkg to bash. (tl;dr – the script is on GitHub.) 
I found an example for the zsh shell which lacked a couple of features and I spent some time examining the script for brew so I wasn’t totally in the dark. There are a number of tutorials available for writing them but none are particularly detailed so that wasn’t much help. Writing Shell Scripts The first thing I should say is that I find writing shell scripts totally different to writing for any other language. I probably write shell scripts incredibly old school, shell and C were the two languages I was paid to write way back in the 1980’s. It feels like coming home. Continue reading →
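The repo-list/repo-add sync earlier in this excerpt can also be done without sed. Below is a minimal sketch (my addition, not from the original post) that parses the autopkg repo-list output in Python and feeds each URL to autopkg repo-add; it assumes the autopkg command is on the PATH on both machines.

```python
#!/usr/bin/env python3
# Sketch: sync AutoPkg recipe repos to another machine in pure Python,
# instead of sed plus a shell while-loop. Assumes `autopkg` is on PATH.
import re
import subprocess

# Equivalent of: autopkg repo-list | sed "s#[^(]*(\([^)]*\)).*#\1#"
output = subprocess.run(["autopkg", "repo-list"],
                        capture_output=True, text=True, check=True).stdout
urls = re.findall(r"\(([^)]+)\)", output)

# Equivalent of the while-read loop feeding autopkg repo-add
for url in urls:
    subprocess.run(["autopkg", "repo-add", url], check=True)
```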
8585
dbpedia
0
27
https://www.freecodecamp.org/news/build-your-first-python-package/
en
How to Build Your Very First Python Package
https://cdn-media-2.free…69d1a4ca17ef.jpg
https://cdn-media-2.free…69d1a4ca17ef.jpg
[ "https://cdn.freecodecamp.org/platform/universal/fcc_primary.svg", "https://www.freecodecamp.org/news/content/images/size/w60/2021/10/Jason-FCC-Photo-1.png 60w", "https://cdn-media-2.freecodecamp.org/w1280/5f9c980f740569d1a4ca17ef.jpg", "https://www.freecodecamp.org/news/content/images/size/w60/2021/10/Jason-FCC-Photo-1.png 60w", "https://cdn.freecodecamp.org/platform/universal/apple-store-badge.svg", "https://cdn.freecodecamp.org/platform/universal/google-play-badge.svg" ]
[]
[]
[ "" ]
null
[ "Jason Dsouza" ]
2020-10-27T23:43:25+00:00
A few months ago, I decided to release Caer [http://github.com/jasmcaus/caer], a Computer Vision package available in Python. I found the process to be excruciatingly painful. You can probably guess why  — little (and confusing) documentation, lack of good tutorials, and so on. So I decided to write this article in the
en
https://cdn.freecodecamp.org/universal/favicons/favicon.ico
freeCodeCamp.org
https://www.freecodecamp.org/news/build-your-first-python-package/
A few months ago, I decided to release Caer, a Computer Vision package available in Python. I found the process to be excruciatingly painful. You can probably guess why — little (and confusing) documentation, lack of good tutorials, and so on. So I decided to write this article in the hope that it'll help people who are struggling to do this. We’re going to build a very simple module and make it available to anyone around the world. The contents of this module follow a very basic structure. There are, in total, four Python files, each of which has a single method within it. We’re going to keep this real simple for now.

base-verysimplemodule  --> Base
└── verysimplemodule   --> Actual Module
    ├── extras
    │   ├── multiply.py
    │   ├── divide.py
    ├── add.py
    ├── subtract.py

You will notice that I have a folder called verysimplemodule which, in turn, has two Python files add.py and subtract.py. There is also a folder called extras (which contains multiply.py and divide.py). This folder will form the basis of our Python module. Bringing out the __init__s Something that you’ll always find in every Python package is an __init__.py file. This file will tell Python to treat directories as modules (or sub-modules). Very simply, it will hold the names of all the methods in all the Python files that are in its immediate directory. A typical __init__.py file has the following format:

from file import method  # 'method' is a function that is present in a file called 'file.py'

When building packages in Python, you are required to add an __init__.py file in every sub-directory in your package. These sub-directories are the sub-modules of your package. For our case, we’ll add our __init__.py files to the ‘actual module’ directory verysimplemodule, like this:

from add import add
from subtract import subtract

and we’re going to do the same for the extras folder, like this:

from multiply import multiply
from divide import divide

Once that’s done, we’re pretty much halfway through the process! How to set up setup.py Within the base-verysimplemodule folder (and in the same directory as our module verysimplemodule), we need to add a setup.py file. This file is essential if you intend to build the actual module in question. Note: Feel free to name the setup.py file as you wish. This file is not name-specific as our __init__.py file is. Possible name choices are setup_my_very_awesome_python_package.py and python_package_setup.py, but it’s usually best practice to stick with setup.py. The setup.py file will contain information about your package, specifically the name of the package, its version, platform-dependencies and a whole lot more. For our purposes, we’re not going to require advanced meta information, so the following code should suit most packages you build:

from setuptools import setup, find_packages

VERSION = '0.0.1'
DESCRIPTION = 'My first Python package'
LONG_DESCRIPTION = 'My first Python package with a slightly longer description'

# Setting up
setup(
    # the name must match the folder name 'verysimplemodule'
    name="verysimplemodule",
    version=VERSION,
    author="Jason Dsouza",
    author_email="<youremail@email.com>",
    description=DESCRIPTION,
    long_description=LONG_DESCRIPTION,
    packages=find_packages(),
    install_requires=[],  # add any additional packages that need to be
                          # installed along with your package. Eg: 'caer'
    keywords=['python', 'first package'],
    classifiers=[
        "Development Status :: 3 - Alpha",
        "Intended Audience :: Education",
        "Programming Language :: Python :: 2",
        "Programming Language :: Python :: 3",
        "Operating System :: MacOS :: MacOS X",
        "Operating System :: Microsoft :: Windows",
    ]
)

With that done, all we have to do next is run the following command in the same directory as base-verysimplemodule:

python setup.py sdist bdist_wheel

This will build all the necessary packages that Python will require. The sdist and bdist_wheel commands will create a source distribution and a wheel that you can later upload to PyPI. PyPI — here we come! PyPI is the official Python repository where all Python packages are stored. You can think of it as the GitHub for Python Packages. To make your Python package available to people around the world, you’ll need to have an account with PyPI. With that done, we’re all set to upload our package on PyPI. Remember the source distribution and wheel that were built when we ran python setup.py? Well, those are what will actually be uploaded to PyPI. But before you do that, you need to install twine if you don’t already have it installed. It’s as simple as pip install twine. How to upload your package to PyPI Assuming you have twine installed, go ahead and run:

twine upload dist/*

This command will upload the contents of the dist folder that was automatically generated when we ran python setup.py. You will get a prompt asking you for your PyPI username and password, so go ahead and type those in. Now, if you’ve followed this tutorial to the T, you might get an error along the lines of repository already exists. This is usually because there is a name clash between the name of your package and a package that already exists. In other words, change the name of your package — somebody else has already taken that name. And that’s it! To proudly pip install your module, fire up a terminal and run:

pip install <package_name>  # in our case, this is pip install verysimplemodule

Watch how Python neatly installs your package from the binaries that were generated earlier. Open up a Python interactive shell and try importing your package:

>> import verysimplemodule as vsm
>> vsm.add(2,5)
7
>> vsm.subtract(5,4)
1

To access the division and multiplication methods (remember that they were in a folder called extras?), run:

>> import verysimplemodule as vsm
>> vsm.extras.divide(4,2)
2
>> vsm.extras.multiply(5,3)
15

It’s as simple as that. Congratulations! You’ve just built your first Python package. Albeit very simple, your package is now available to be downloaded by anyone around the world (so long as they have Python, of course). What's next? Test PyPI The package that we used in this tutorial was an extremely simple module — basic mathematical operations of addition, subtraction, multiplication and division. It doesn’t make sense to upload them directly to PyPI, especially since you’re trying this out for the first time. Lucky for us, there is Test PyPI, a separate instance of PyPI where you can test out and experiment on your package (you will need to sign up for a separate account on the platform). The process that you follow to upload to Test PyPI is pretty much the same with a few minor changes.

# The following command will upload the package to Test PyPI
# You will be asked to provide your Test PyPI credentials
twine upload --repository testpypi dist/*

To download projects from Test PyPI:

pip install --index-url https://test.pypi.org/simple/ <package_name>

Advanced Meta Information The meta information we used in the setup.py file was very basic. You can add additional information such as multiple maintainers (if any), author email, license information and a whole host of other data. This article will prove particularly helpful if you intend to do so. Look at other repositories Looking at how other repositories have built their packages can prove to be super useful to you.
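For completeness, the tutorial never shows the four module files themselves. Below is a minimal sketch of what add.py and subtract.py might contain (my assumption; multiply.py and divide.py would follow the same pattern). One caveat: on Python 3, the bare from add import add style shown above may need to be written as a relative import (from .add import add) for the installed package to import cleanly.

```python
# Hypothetical verysimplemodule/add.py (not shown in the article)
def add(x, y):
    """Return the sum of two numbers."""
    return x + y

# Hypothetical verysimplemodule/subtract.py (not shown in the article)
def subtract(x, y):
    """Return x minus y."""
    return x - y
```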
8585
dbpedia
3
7
https://www.osnews.com/story/6102/autopackage-04-released/
en
Autopackage 0.4 Released – OSnews
https://www.osnews.com/i…avicon-32x32.png
https://www.osnews.com/i…avicon-32x32.png
[ "https://www.osnews.com/wp-content/uploads/2022/02/osnews-ukraine.png", "https://secure.gravatar.com/avatar/cb876ec005278bf5b01ecf1b62624bc0?s=72&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/3e9a84bec63903f359d6751153f70510?s=68&d=identicon&r=r", "https://www.osnews.com/images/emo/smile.gif", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/a7fe7325dc56d485c9693901caa4602f?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/eeb7c5d8075d578d7bc64af943a53032?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/400fff2d908e0311c04342d4d0da25f9?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/76daba515445ee1389de2e6e6fae3f44?s=68&d=identicon&r=r" ]
[]
[]
[ "" ]
null
[]
null
en
/icons/apple-touch-icon.png
https://www.osnews.com/story/6102/autopackage-04-released/
8585
dbpedia
0
31
https://osxdominion.wordpress.com/tag/python/
en
OS X Dominion: Mastering OS X Management
https://s0.wp.com/i/blank.jpg
https://s0.wp.com/i/blank.jpg
[ "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[ "Nick McSpadden" ]
2015-11-02T10:58:51-08:00
Posts about python written by Nick McSpadden
en
https://s1.wp.com/i/favicon.ico
OS X Dominion: Mastering OS X Management
https://osxdominion.wordpress.com/tag/python/
AutoPkg Wrapper Scripts There are myriad AutoPkg wrapper scripts/tools available out there: [Sean Kaiser’s AutoPkg Change Notifications script](http://seankaiser.com/blog/2013/12/16/autopkg-change-notifications/) [The Linde Group’s GUI-based AutoPkgr](http://www.lindegroup.com/autopkgr) [Allister Banks’ Jenkins-based script](https://www.afp548.com/2015/05/22/one-way-to-be-autopkging-the-jenkins/) [Ben Goodstein’s bash script](https://gist.github.com/fuzzylogiq/a2b922b41aa7d320dfc1) And several more… They all serve the same basic goal – run AutoPkg with a selection of recipes, and trigger some sort of notification / email / alert when an import succeeds, and when a recipe fails. This way, admins can know when something important has happened and make any appropriate changes to their deployment mechanism to incorporate new software. Everything Goes In Git Facebook is, unsurprisingly, big on software development. As such, Facebook has a strong need for source control in all things, so that code and changes can always be identified, reviewed, tested, and if necessary, reverted. Source control is an extremely powerful tool for managing differential changes among flat text files – which is essentially what AutoPkg is. Munki also benefits strongly, as all of Munki configuration is based solely on flat XML-based files. Pkginfo files, catalogs, and manifests all benefit from source control, as any changes made to the Munki repo will involve differential changes in (typically) small batches of lines relative to the overall sizes of the catalogs. Obvious note: binaries packages / files have a more awkward relationship with git and source control in general. Although it’s out of the scope of this blog post, I recommend reading up on Allister Banks’ article on git-fat on AFP548 and how to incorporate large binary files into a git repo. Git + Munki At Facebook, the entire Munki repo exists in git. When modifications are made or new packages are imported, someone on the Client Platform Engineering team makes the changes, and then puts up a differential commit for team review. Another member of the team must then review the changes, and approve. This way, nothing gets into the Munki repo that at least two people haven’t looked at. Since it’s all based on git, merging changes from separate engineers is relatively straightforward, and issuing reverts on individual packages can be done in a flash. AutoPkg + Munki AutoPkg itself already has a great relationship with git – generally all recipes are repos on GitHub, most within the AutoPkg GitHub organization, and adding a new repo generally amounts to a git clone. My initial attempts to incorporate AutoPkg repos into a separate git repo were a bit awkward. “Git repo within a git repo” is a rather nasty rabbit hole to go down, and once you get into git submodules you can see the fabric of reality tearing and the nightmares at the edge of existence beginning to leak in. Although submodules are a really neat tactic, regulating the updating of a git repo within a git repo and successfully keeping this going on several end point machines quickly became too much work for too little benefit. We really want to make sure that AutoPkg recipes we’re running are being properly source controlled. We need to be 100% certain that when we run a recipe, we know exactly what URL it’s pulling packages from and what’s happening to that package before it gets into our repo. 
We need to be able to track changes in recipes so that we can be alerted if a URL changes, or if more files are suddenly copied in, or any other unexpected developments occur. This step is easily done by rsyncing the various recipe repos into git, but this has the obvious downside of adding a ton of stuff to the repo that we may not ever use. The Goal The size and shape of the problem is clear: We want to put only recipes that we care about into our repo. We want to automate the updating of the recipes we care about. We want code review for changes to the Munki repo, so each package should be a separate git commit. We want to be alerted when an AutoPkg recipe successfully imports something into Munki. We want to be alerted if a recipe fails for any reason (typically due to a bad URL). We really don’t want to do any of this by hand. autopkg_runner.py Facebook’s Client Platform Engineering team has authored a Python script that performs these tasks: autopkg_runner.py. The Setup In order to make use of this script, AutoPkg needs to be configured slightly differently than usual. The RECIPE_REPO_DIR key should be the path to where all the AutoPkg git repos are stored (when added via autopkg add). The RECIPE_SEARCH_DIRS preference key should be reconfigured. Normally, it’s an array of all the git repos that are added with autopkg add (in addition to other built-in search paths). In this context, the RECIPE_SEARCH_DIRS key is going to be used to contain only two items – ‘.’ (the local directory), and a path to a directory inside your git repo that all recipes will be copied to (with rsync, specifically). As described earlier, this allows any changes in recipes to be incorporated into git differentials and put up for code review. Although not necessary for operation, I also recommend that RECIPE_OVERRIDE_DIRS be inside a git repo as well, so that overrides can also be tracked with source control. The entire Munki repo should also be within a git repo, obviously, in order to make use of source control for managing Munki imports. Notifications In the public form of this script, the create_task() function is empty. This can be populated with any kind of notification system you want – such as sending an email, generating an OS X notification to Notification Center (such as Terminal Notifier or Yo), filing a ticket with your ticketing / helpdesk system, etc. If run as is, no notifications of any kind will be generated. You’ll have to write some code to perform this task (or track me down in Slack or at a conference and badger me into doing it). What It Does The script has a list of recipes to execute inside (at line 33). These recipes are parsed for a list of parents, and all parent recipes necessary for executing these are then copied into the RECIPE_REPO_DIR from the AutoPkg preferences plist. This section is where you’ll want to put in the recipes that you want to run. Each recipe in the list is then run in sequence, and catalogs are made each time. This allows each recipe to create a full working git commit that can be added to the Munki git repo without requiring any other intervention (obviously into a testing catalog only, unless you shout “YOLO” first). Each recipe saves a report plist. This plist is parsed after each autopkg run to determine if any Munki imports were made, or if any recipes failed. The function create_task() is called to send the actual notification. 
If any Munki imports were made, the script will automatically change directory to the Munki repo, and create a git feature branch for that update – named after the item and the version that was imported. The changes that were made (the package, the pkginfo, and the changes to the catalogs) are put into a git commit. Finally, the current branch is switched back to the Master branch, so that each commit is standalone and not dependent on other commits to land in sequence. NOTE: the commits are NOT automatically pushed to git. Manual intervention is still necessary to push the commit to a git repo, as Facebook has a different internal workflow for doing this. An enterprising Python coder could easily add that functionality in, if so desired. Execution & Automation At this point, executing the script is simple. However, in most contexts, some automation may be desired. A straightforward launch daemon to run this script nightly could be used: Some Caveats on Automation Automation is great, and I’m a big fan of it. However, with any automated system, it’s important to fully understand the implications of each workflow. With this particular workflow, there’s a specific issue that might arise based on timing. Since each item imported into Munki via AutoPkg is a separate feature branch, that means that the catalog technically hasn’t changed when you run the .munki recipe against the Master branch. If you run this recipe twice in a row, AutoPkg will try to re-import the packages again, because the Master branch hasn’t incorporated your changes yet. In other words, you probably won’t want to run this script until your git commits are pushed into Master. This could be a potential timing issue if you are running this script on a constant time schedule and don’t get an opportunity to push the changes into master before the next iteration. I Feel Powerful Today, Give Me More If you are seeking even more automation (and feel up to doing some Python), you could add in a git push to make these changes happen right away. If you are only adding in items to a testing catalog with limited and known distribution, this may be reasonably safe way to keep track of all Munki changes in source control without requiring human intervention. Such a change would be easy to implement, since there’s already a helper function to run git commands – git_run(). Here’s some sample code that could incorporate a git push, which involves making some minor changes to the end of create_commit(): Conclusions Ultimately, the goal here is to remove manual work from a repetitive process, without giving up any control or the ability to isolate changes. Incorporating Munki and AutoPkg into source control is a very strong way of adding safety, sanity, and accountability to the Mac infrastructure. Although this blog post bases it entirely around git, you could accommodate a similar workflow to Mercurial, SVN, etc. The full take-away from this is to be mindful of the state of your data at all times. With source control, it’s easier to manage multiple people working on your repo, and it’s (relatively) easy to fix a mistake before it becomes a catastrophe. Source control has the added benefit of acting as an ersatz backup of sorts, where it becomes much easier to reconstitute your repo in case of disaster because you now have a record for what the state of the repo was at any given point in its history.
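The sample launch daemon and the git push snippet referenced above were embedded in the original post and did not survive extraction here. As a rough sketch only (this is not the real autopkg_runner.py; the repo path, the git_run() helper, and its signature are all assumptions), the feature-branch flow the post describes looks roughly like this:

```python
# Rough sketch of the branch/commit flow described above. NOT the real
# autopkg_runner.py; the helper name git_run() and its signature are assumed.
import subprocess

MUNKI_REPO = "/Users/Shared/munki_repo"  # assumed path

def git_run(*args):
    """Run a git command inside the Munki repo and return its output."""
    return subprocess.run(["git", "-C", MUNKI_REPO, *args],
                          capture_output=True, text=True, check=True).stdout

def create_commit(item_name, version, changed_paths):
    branch = f"{item_name}-{version}"          # feature branch per imported item
    git_run("checkout", "-b", branch)          # create the feature branch
    git_run("add", *changed_paths)             # pkg, pkginfo, and catalog changes
    git_run("commit", "-m", f"Adding {item_name} {version}")
    git_run("checkout", "master")              # keep each commit standalone
    # git_run("push", "origin", branch)        # optional automatic push
```

Uncommenting the final line would add the automatic push discussed in the "I Feel Powerful Today, Give Me More" section, with the timing caveats noted above.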
8585
dbpedia
2
28
https://www.geeksforgeeks.org/making-automatic-module-installer-in-python/
en
Making automatic module installer in Python
https://media.geeksforge…_200x200-min.png
https://media.geeksforge…_200x200-min.png
[ "https://media.geeksforgeeks.org/gfg-gg-logo.svg", "https://media.geeksforgeeks.org/auth-dashboard-uploads/Google-news.svg", "https://media.geeksforgeeks.org/wp-content/uploads/20201214212626/Screenshot604.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/Google-news.svg", "https://media.geeksforgeeks.org/auth-dashboard-uploads/new-premium-rbanner-us.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/gfgFooterLogo.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/googleplay.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/appstore.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/suggestChangeIcon.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/createImprovementIcon.png" ]
[]
[]
[ "Data Structures", "Algorithms", "Python", "Java", "C", "C++", "JavaScript", "Android Development", "SQL", "Data Science", "Machine Learning", "PHP", "Web Development", "System Design", "Tutorial", "Technical Blogs", "Interview Experience", "Interview Preparation", "Programming", "Competitive Programming", "Jobs", "Coding Contests", "GATE CSE", "HTML", "CSS", "React", "NodeJS", "Placement", "Aptitude", "Quiz", "Computer Science", "Programming Examples", "GeeksforGeeks Courses", "Puzzles", "SSC", "Banking", "UPSC", "Commerce", "Finance", "CBSE", "School", "k12", "General Knowledge", "News", "Mathematics", "Exams" ]
null
[ "GeeksforGeeks" ]
2020-12-25T12:31:47
A Computer Science portal for geeks. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions.
en
https://media.geeksforge…/gfg_favicon.png
GeeksforGeeks
https://www.geeksforgeeks.org/making-automatic-module-installer-in-python/
Prerequisites: urllib, subprocess Many times the task of installing a module not already available in built-in Python can seem frustrating. This article focuses on removing the need to open a command-line interface and type pip install <module name> to download Python modules. In this article, we will try to automate this process. Approach: Import the subprocess module to simulate the command line using Python. Import urllib.request to implement the internet-checking facility. Input the module name using the input() function and initialize the module_name variable. Send the module name to the main function. In the main function, we first update our pip version directly through Python for the smooth functioning of our app. In the next line, p = subprocess.run('pip3 install ' + module_name), we write pip3 install module_name virtually into the command line. Based on the combination of the return code of p from the statement above and the return value of the connect() function, we can determine what happened and give the desired output. Function used: The connect() function is used for the following purposes: To check whether the internet is on or off. Reach a specific URL using the urllib.request.urlopen(host) command. If reached successfully, return True. Otherwise, if the URL cannot be reached, i.e. the internet is off, return False. Given below is the implementation to achieve our functionality using the above approach. Example:
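The article's code listing and output screenshot did not survive extraction here. The following is a plausible reconstruction that follows the approach described above (update pip, attempt the install with subprocess, then use connect() to decide why a failure happened); it is not GeeksforGeeks' exact code.

```python
# Plausible reconstruction of the described script (the original listing did
# not survive extraction), following the approach above.
import subprocess
import urllib.request

def connect(host="https://www.google.com"):
    """Return True if the internet is reachable, False otherwise."""
    try:
        urllib.request.urlopen(host, timeout=5)
        return True
    except Exception:
        return False

def main(module_name):
    subprocess.run("pip3 install --upgrade pip", shell=True)       # update pip first
    p = subprocess.run("pip3 install " + module_name, shell=True)  # install the module
    if p.returncode == 0:
        print(module_name, "installed successfully")
    elif connect():
        print("Installation failed: check the module name")        # internet on, bad name
    else:
        print("Installation failed: no internet connection")

if __name__ == "__main__":
    main(input("Enter module name: "))
```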
8585
dbpedia
2
90
https://discourse.mcneel.com/t/what-are-we-paying-for-frustrations-with-rhino-vs-open-source/152921
en
What are we paying for? Frustrations with Rhino vs. Open Source
https://global.discourse…4e63abbc920c.png
https://global.discourse…4e63abbc920c.png
[ "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/lewnworx/48/478539_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/wiebe_r/48/416806_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/wiebe_r/48/416806_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/wiebe_r/48/416806_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/lewnworx/48/478539_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/lewnworx/48/478539_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/lewnworx/48/478539_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/lewnworx/48/478539_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/lewnworx/48/478539_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/lewnworx/48/478539_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/jimcarruthers/48/504725_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/lewnworx/48/478539_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/keithscadservices/48/599702_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/dale/48/500088_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/keithscadservices/48/599702_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/stevebaer/48/473567_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/wiebe_r/48/416806_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/wiebe_r/48/416806_2.png", "https://sea1.discourse-cdn.com/mcneel/user_avatar/discourse.mcneel.com/wiebe_r/48/416806_2.png" ]
[]
[]
[ "mac" ]
null
[ "wiebe_R (wiebe_R)", "keithscadservices (Keithscadservices)", "miano (Miano)", "nathanletwory (Nathan 'jesterKing' Letwory)", "dale (Dale Fugier)", "stevebaer (Steve Baer)" ]
2023-01-05T23:03:23+00:00
I’ve been an avid Rhino user for several years now, having used it on both Windows and now primarily on Mac. Lately, though, I’m getting increasingly frustrated by the many long-standing issues that seem like they should&hellip;
en
https://global.discourse-cdn.com/mcneel/uploads/default/optimized/3X/e/8/e8ac746b7d3ee947a397b6e4f08d03f650423c3c_2_32x32.ico
McNeel Forum
https://discourse.mcneel.com/t/what-are-we-paying-for-frustrations-with-rhino-vs-open-source/152921
I’ve been an avid Rhino user for several years now, having used it on both Windows and now primarily on Mac. Lately, though, I’m getting increasingly frustrated by the many long-standing issues that seem like they should have been fixed long ago. And having recently gotten into using Blender, I’m even more frustrated that a $1000 software with an additional charge for each upgrade seems to be so far behind a free and open source project. The differences between the Mac and Windows version are mind-boggling to me, and frankly it verges on a rip-off that Mac users pay the same yet receive a distinctly less-powerful and less user-friendly piece of software. The reason I felt compelled to write this was discovering today that editing toolbars and the pop-up menu is not possible in Mac, which in my opinion severely cripples the ability to increase productivity with customization on the Mac version. Many other gripes are summed up nicely here, such as Osnap inconsistencies, and of course the relatively long list of commands that still are not available on Mac, with no explanation as to why they cannot/have not been ported. The Mac vs. Windows feature comparison page is an absolute joke, none of the tangible differences are indicated here, only a vague indication that plugins and developer tools are less robust and that the niche function of worksessions isn’t available. This feels particularly insidious, as an unsuspecting person would likely conclude that the Mac version is virtually the same and proceed to buy it after viewing that page, only to discover its limitations after they have committed to the Mac ecosystem. What is promising is that in the forum topic I linked previously there is an indication that these things will be remedied in Rhino 8, however many of them seem like they should never have been issues in the first place. John Brock said in the linked forum post: The Mac and Windows U/I are different by design. It was decided long ago for Mac users would never use a Windows application, or a Windows application that just ran on the Mac, it would be a bad decision. This turned out to be largely true. Since then, there seems to be some push back from people like yourself. Who decided this? Was there any say by paying customers? By what metric can you conclude it turned out to be largely true? There are plenty of applications that are “windows applications that just ran on the Mac”, the entire Microsoft office suite is a great example and it is still the standard for office work whether you use Windows or a Mac, with very little difference in UI beyond the typical placing of drop down menus in the system menubar on Mac. The fixes within Rhino 8 also cannot come fast enough. The development pace of Rhino seems incredibly sluggish compared to Blender. Blender managed to entirely rewrite it’s core render engine and provide support for Metal and M1 macs within 2 years of M1 being announced. When I first downloaded blender in early 2021, it did not support Metal GPU rendering on my AMD5500M, when I revisited it in mid 2022 I was greeted with an entirely overhauled render engine that fully functioned with Metal out of the box, not to mention an incredibly powerful node system that is applied to almost every aspect of Blender, and many other new features. Rhino 8 with M1 support (and supposed UI unification) is still WIP, and from what I can tell it is still solidly in the WIP phase, 2 years after it began. 
I’m no longer a student, so if I want a fully functioning Mac product I will likely have to pony up $600. The starkness of these differences combined with Rhino now using Cycles (in an incredibly unintuitive and slow implementation) for some rendering is laughably ironic. Of course Cycles is open-source and it is important that anyone can use open-source software regardless of corporate status, but it definitely doesn’t feel good to know that the state of Rhino development is such that my money is going towards a crippled implementation of a render engine that in it’s native (free) environment is much more powerful. I sure hope McNeel contributes to the Blender Development Fund, but unless their contributions are hidden it does not look like they do. This all looks like a lot of complaining, because it is, but the reason I’m saying any of this at all is because I think Rhino is one of the best softwares for 3D modelling, nothing can really compare to the full breadth of work that can be down with it, especially if you include grasshopper. But there are some pretty major sore spots that, in my opinion, but Rhino at risk of falling far behind other software, both paid and free, and I don’t want that to happen. I am of course, not a developer, nor do I have anywhere near the amount of experience in 3D software as the people at McNeel. I’m totally open to having my naivete pointed out, and I’m curious what perspectives other people have on the state of Rhino development, regarding the Mac version or otherwise. The comparison to Blender isn’t entirely fair, as Blender has a much larger user base. This is where open-source gets it’s strength though. I’m also aware the Rhino is one of the few professional software packages that is still on a one-time purchase scheme, as opposed to a subscription service. I appreciate that but it seems like that puts it in an odd niche between the widespread development and financial support of an open-source project, and the high-end support of a professional product funded by thousands of dollars a year in subscriptions. Rhino going open-source would be a dream come true in my opinion, but I’m not imagining that that is a remote possibility. My hope is that the long-term strategy for Rhino takes some notes from open-source development, including platform-agnosticism, increased transparency in decision making, a more streamlined and rapid development cycle, and further inclusion of what is indeed a vibrant community of users and developers. You obviously haven’t spent any appreciable time in Autocad for window vs mac. Now autodesk uses the exact licensing model you champion, charge a ludicrous amount for either platform with show stopping fatal bugs, some of which have been there for so long they border on being of legal drinking age in all 50 states. Until you’ve attempted to do any relatively simple Boolean operations in AutoCad, then have left your desk, got lunch, read a decent chunk of War and Peace, and did your taxes while waiting for it to complete only to have the memory leak from hell surface and lock up your entire machine to the point where a 3 finger salute won’t fix it and then have to cold power your box (and as a result losing whatever work you might have had open in other apps in the process), you really can’t appreciate the relative solidity of the Rhino platform. Hell Rhino’s first beta test release was infinitely more solid than any Autocad “Release” I’ve ever dealt with. 
About the only feature in AC that is fairly solid is it’s crash recovery, probably because that’s it’s most used feature. I recently spent 4 years at a job where the only platform that the company supported was AC and those were the most miserable years of my career. Thank God for Covid layoffs, as they did me a huge service. As one who’s had to use both for a good decade now on both platforms I’d dare say the UI differences in Rhino between the platforms, while valid, aren’t nearly as miserable is bouncing back and forth between AC Mac and Windows. At least when I do that in Rhino I know that what work I have will be present in either platform, with no hidden “goodies” like in AC where for no particular reason opening a windows AC file in Mac will occasionally scramble your layers and the objects on them as well as the entire layer definition beyond recognition. Or the “whoops we reset your scale definition at the file level and applied it as object attributes which now requires you to manually select each item, one at a time and fix the scale” bug (been there for oh about 4 years now). Or how about the “you didn’t really expect us to keep track of your x-ref’s relative states when you moved that set of nested directories onto a shared network drive did you” which will break entire document sets and require you to spend hours fixing them. And that bug’s been there oh, for about 10 years now. And in exchange for those and innumerable other goodies in the AC platform you get to pay an absolutely absurd annual “subscription with support” fee, and that “support’ comes with it uh…… Nothing. You can report bugs till the cows come home and hear crickets, or at best they’ll point you to some bit of embedded online help that hasn’t had it’s dialog pictures updated since god only knows what ancient rev when that feature was introduced. If you can’t figure out how something works? The documentation is more or less useless. Best hope that somebody in a non AC forum has answered it, because if you raise it in the AD forum you might be on social security before anybody answers, if at all. The relative arrogance of the Autocad developers is the stuff of legend. (And to be fair that seems to be an AC only thing. The Fusion360 and Inventor folks are actually pretty responsive). As far as feature discrepancy between platforms? Sure thats there, and are there gaps? Yeah. However as the McKneel folks have pointed out, some of those are based in stuff thats got a host OS dependency, which sucks if you’ve done any software development work at all, just keeping up with Microsoft’s “New Technology De Jour which we won’t actually fix any bugs in but replace it with another New Technology De Jour in a couple years”. Same thing but in a different way in Apple Land. This makes it really difficult to implement certain things reliably, let alone in a cross platform fashion. Now I’ve been using Cad and 3D software probably since before you were born, and have ran the gauntlet of these things. I”ve used more 3d modeling packages that have long since vanished from the ecosphere than I can count. I’ve beta and in some cases alpha tested a number of these. And having done a fair bit of software development work over the years, I can say this much. CAD and 3D applications are quite possibly one of the most challenging types of software to create. The required skill sets go well beyond those that any other application development requires. 
You can’t just know how to code for a given platform, you have to have an innate understanding of very complex math, be able to do matrix and vector math in your head and juggle notions of terms that most people have never heard of (I guaranty that if you use the term Quaternion in a sentence in a public gathering, most present will think you’re from some other planet). So the folks that do this work are pretty special. And at most other places, these devs are essentially kept behind locked doors that no one has access to. Getting someone at one of these places to answer a question is pretty much unbotainium. And over the years? I’ve never EVER seen a development team that’s a fraction as approachable as the McNeel folks are. You sure as hell won’t find any of the developers hanging out in the user forums, answering questions, tossing up example files of how to do specific things, and integrating to the degree they do here with any other platform I’ve ever seen or used, including many of the Uber big players like SolidWorks, ProE and Catia, and any one of those platforms annual subscription costs orders of magnitude more than the $700 price you are wingeing about. Being a Mac user by choice, I’m just tickled pink that McN first introduced a Mac version (back at V5) at a time when most software outfits were abandoning the platform altogether. After years of fighting with AC mac (which in those days was more or less utterly unusable for anything vaguely approximating production work) and having nursed along the final build of EI’s Modeler after it went defunct, having an actual workable nurbs CAD platform on the Mac that actually worked and didn’t cost 5 digits to license was an absolute godsend. And about “blender”. Sure it’s open source Sure it’s free. Sure it’s a lot of things. And I’ve used it, specifically for rendering, until Apple nuked support for NVidia’s CUDA drivers and that was the end of that. And this was long before Pascal and crew had the spiffy little import export module and you had to bring stuff in as OBJ’s and spend a friggin eternity fixing orientations, and a raft of other crap just to get a render out. Is cycles a decent renderer? Yah, sorta. Can you get good results with it? Yeah if you spend a ton of time learning the quirks and all that. Is it the be all and end all? No, not even close. And FWIW, Blender has a UI and underlying methodology that only a serious masochist would pick as a tool of choice in a professional environment. You sure dont’ see any major production houses using it for any real work, and there’s a good reason for that. I get that there’s the whole sub culture of folks out there doing all kinds of clever stuff in blender. And I’ve actually got some decent stuff out of it, but at a cost in time and “farting around factor” that a real production house wouldn’t sit still for. Now granted, these days I don’t even do anim and rendering work as the vast majority of my Cad work these days is either production drawings or stuff that gets exported for FDM production. I can’t speak to the M1 issue as I don’t have an M1 yet specifically because I know Rhino’s not quite there yet. I’ll give you that it’d be nice if it was, but it’s not. And bear in mind the vast majority of apps that DO run on a M1 are running under Rosetta and their performance is acceptable simply because the aren’t doing what Rhino does. Word and Excel? Not exactly challenging apps from a CPU and GPU standpoint. Is there stuff that could be improved? Sure. 
Some notion of drawing sets would be nice, and a full implementation of Autocad’s dynamic blocks would allow me to kiss that god awful nightmare of a platform good bye forever. However I’m a pragmatist. I know I’m not the only one with wish lists. I’m also old enough to realize my particular wish list may not be the same as others, let alone in the majority. But as you’re apparently fresh outta college, and haven’t actually had to live in the real world where we gotta get stuff done day in and day out to meet real world deadlines and when there’s some aspect in your tool that you rely on to meet said deadlines with, that either you haven’t used that much or is responding different than what you’d expect, I’ll take the comparatively nearly instantly responsive folks at McNeel over the extended middle finger I might (or more often might not) get back get from AutoDesk a few weeks if not months later (at 2-3x the price each and every year) any day. True, Blender is a bit of an outlier. But I don’t think I implied that Blender is ‘sticking it to the man’, but I also don’t think it is anywhere near the territory of Adobe. Apple being a patron-level supporter of the Blender Foundation is hardly going to increase the market share of the Mac at all, let alone get it to levels that could be considered monopolization. Blender is never going to be Mac-specific, so the relatively (for Apple) small amount of money that Apple forks over to Blender provides a relatively higher value to individuals with Macs than it does Apple. That’s pretty different than Adobe sitting on their laurels, charging ludicrous prices, and leaving their software riddled with bugs and performance issues because they have near-complete monopoly. I would say you need to be just as much of a pro, if not more, to model efficiently in Rhino. You need to create your own list of keystrokes mapped to commands and memorize those if you don’t want to stop and type everything into the command line every 10 seconds. It’s also true that Blender and Rhino fill different niches, I’m not comparing the software features directly here, just the overall level of development and philosophy. You’d have to be a serious masochist to want to do any serious rendering/texturing work with Rhino as well…even with the new render engine in V7. Blender has changed a lot, if you haven’t taken a look in awhile I would say it’s worth it to revisit. In my opinion the UI is very intuitive once you understand the philosophy behind it. Everything is a series of windows that can be set to virtually any tool panel, editor, or viewport, and they can be easily split vertically or horizontally. There are several default workspaces that are just modifications of this window system, for instance the shading workspace splits the default workspace into 4 panels, showing you a 3D viewport, a shader editor, a file manager, and an image editor. The UV editing workspace splits it into a 3D viewport and a UV editor. There isn’t anything baked into these workspaces, they could all be created by hand if you wanted. And of course you can make and save your own workspaces very easily for whatever your workflow is. There are various ‘modes’ like Edit Mode (similar to PointsOn in Rhino if you aren’t familiar) that can be accessed in any 3D viewport in any workspace. This sort of modularity is something I would LOVE to see in Rhino, it is far more intuitive than the current viewport splitting system, and I think it would manage the diversity of workflows in Rhino better. 
You could have a drafting workspace that sets one Top viewport, shows you the layers panel, puts the drafting tool palette front-and-center. Or a rendering workspace that shows you the materials, UV, texture mapping, etc alongside a Named Viewport. Modelling workspaces could turn PointsOn automatically, Layer States could be assigned to each workspace which would make them much more useful in my opinion, I could go on. This could all be set up quickly by the user for their needs, which vary widely between the different fields Rhino is used in, eg. architecture vs. product design. As someone in the architecture field, even just having a drafting workspace set up to show me a top view and turn off everything but the layers with dimensions and plan drawings would be incredibly time/headache saving. Being able to switch between that and a typical modelling workspace (with OneView on by default perhaps?) with a single click would be fantastic. I understand that, that is what I have done actually and I’m glad it is that way. I should have worded my original post differently, the issue would arise if someone was contemplating buying a Mac and was persuaded to do so by the comparison page thinking that Rhino would be almost the same. That’s a good point, I think I can forget how bad cross-platform support used to be (and still is in some cases) compared to where it seems to be going now. Fair enough, I’m glad that Rhino 8 is heading in that direction. Despite my wingeing I am appreciative of the work you do. I largely agree, I appreciate the evolutionary approach to Rhino. I guess I’m just wondering if there is a way to maintain that evolution without creating bloatware. I think there is but indeed that will require even more time.
8585
dbpedia
0
89
https://pydoc.dev/sphinx/latest/sphinx.ext.apidoc.html
en
sphinx.ext.apidoc
[]
[]
[]
[ "" ]
null
[]
null
null
Parses a directory tree looking for Python modules and packages and creates ReST files appropriately to create code documentation with Sphinx. It also creates a modules index (named modules.<suffix>). This is derived from the "sphinx-autopackage" script, which is: Copyright 2008 Société des arts technologiques (SAT), https://sat.qc.ca/
8585
dbpedia
3
33
https://scriptingosx.com/2020/02/wrangling-pythons/
en
Wrangling Pythons
https://i0.wp.com/script…=800%2C557&ssl=1
https://i0.wp.com/script…=800%2C557&ssl=1
[ "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2019/11/cropped-NewShebang-1.png?fit=248%2C248&ssl=1", "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2020/02/WranglingPython-Perseus.jpg?resize=800%2C510&ssl=1", "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2020/02/WranglingPython-InstallDevToolsDialog.png?w=660&ssl=1" ]
[]
[]
[ "" ]
null
[ "Author ab" ]
2020-02-11T13:47:34+00:00
As I noted in my last Weekly News Summary, several open source projects for MacAdmins have completed their transition to Python 3. AutoPkg, JSSImport and outset announced Python 3 compatible versions last week and Munki already had the first Python 3 version last December. Why? Apple has included a version of Python 2 with Mac…
en
https://i0.wp.com/script…it=32%2C32&ssl=1
Scripting OS X
https://scriptingosx.com/2020/02/wrangling-pythons/
As I noted in my last Weekly News Summary, several open source projects for MacAdmins have completed their transition to Python 3. AutoPkg, JSSImport and outset announced Python 3 compatible versions last week and Munki already had the first Python 3 version last December. Why? Apple has included a version of Python 2 with Mac OS X since 10.2 (Jaguar). Python 3.0 was released in 2008 and it was not fully backwards compatible with Python 2. For this reason, Python 2 was maintained and updated alongside Python 3 for a long time. Python 2 was finally sunset on January 1, 2020. Nevertheless, presumably because of the compatibility issues, Apple has always pre-installed Python 2 with macOS and still does so in macOS 10.15 Catalina. With the announcement of Catalina, Apple also announced that in a “future version of macOS” there will be no pre-installed Python of any version. Scripting language runtimes such as Python, Ruby, and Perl are included in macOS for compatibility with legacy software. Future versions of macOS won’t include scripting language runtimes by default, and might require you to install additional packages. If your software depends on scripting languages, it’s recommended that you bundle the runtime within the app. (macOS 10.15 Catalina Release Notes) This also applies to Perl and Ruby runtimes and other libraries. I will be focussing on Python because it is used more commonly for MacAdmin tools, but most of this post will apply equally to Perl and Ruby. Just mentally substitute your preferred language for “Python”. The final recommendation is what AutoPkg and Munki are following: they are bundling their own Python runtime. How to get Python There is a second bullet in the Catalina release notes, though: Use of Python 2.7 isn’t recommended as this version is included in macOS for compatibility with legacy software. Future versions of macOS won’t include Python 2.7. Instead, it’s recommended that you run python3 from within Terminal. (51097165) This is great, right? Apple says there is a built-in Python 3! And it’s pre-installed? Just move all your scripts to Python 3 and you’ll be fine! Unfortunately, not quite. The python3 binary does exist on a ‘clean’ macOS, but it is only a stub tool that will prompt a user to download and install the Command Line Developer Tools (aka “Developer Command Line Tools” or “Command Line Tools for Xcode”). This is common for many tools that Apple considers to be of little interest to ‘normal,’ non-developer users. Another common example is git. When you install Xcode, you will also get all the Command Line Developer Tools, including python3 and git. This is useful for developers, who may want to use Python scripts for build operations, or for individuals who just want to ‘play around’ or experiment with Python locally. For MacAdmins, it adds the extra burden of installing and maintaining either the Command Line Developer Tools or the full Xcode install. Python Versions, a multitude of Snakes After installing Xcode or the Command Line Developer Tools, you can check the version of python installed (versions on macOS 10.15.3 with Xcode 11.3.1):

> python --version
Python 2.7.16
> python3 --version
Python 3.7.3

When you go on the download page for Python.org, you will get Python 3.8.1 (as of this writing). But, on that download page, you will also find download links for “specific versions” which include (as of this writing) versions 3.8.1, 3.7.6, 3.6.10, 3.5.9, and the deprecated 2.7.17. 
The thing is that Python isn’t merely split into two major release versions, which aren’t fully compatible with each other, but there are several minor versions of Python 3, which aren’t fully compatible with each other, but are still being maintained in parallel. Developers (individuals, teams, and organisations) that use Python will often hold on to a specific minor (and sometimes even patch) version for a project to avoid issues and bugs that might appear when changing the run-time. When you install the latest version of Munki, it will install a copy of the Python framework in /usr/local/munki/ and create a symbolic link to that python binary at /usr/local/munki/python. You can check its version as well:

% /usr/local/munki/python --version
Python 3.7.4

All the Python code files for Munki will have a shebang (the first line in the code file) of

#!/usr/local/munki/python

This ensures that Munki code files use this particular instance of Python and no other copy of Python that may have been installed on the system. The latest version of AutoPkg has a similar approach:

> /usr/local/autopkg/python --version
Python 3.7.5

In both cases the python binary is a symbolic link. This allows the developer to change the symbolic link to point to a different Python framework. The shebangs in all the code files point to the symbolic link, which can be changed to point to a different Python framework. This is useful for testing and debugging. Could MacAdmins use this to point both tools to the same Python framework? Should they? The Bridge to macOS On top of all these different versions of Python itself, many scripts, apps, and tools written in Python rely on ‘Python modules.’ These are libraries (or frameworks) of code for a certain task that can be downloaded and included with a Python installation to extend the functionality of Python. The most relevant of these modules for MacAdmins is the “Python Objective-C Bridge.” This module allows Python code to access and use the native macOS Cocoa and CoreFoundation Frameworks. This not only allows for macOS native GUI applications to be written in Python (e.g. AutoDMG and Munki’s Managed Software Center [update: MSC was re-written in Swift last year]), but also allows short scripts to access system functions. This is sometimes necessary to get data that matches what macOS applications “see” rather than what the raw unix tools see. For example, the defaults tool can be used to read the value of property lists on disk. But those might not necessarily reflect the actual preference value an application sees, because that value might be controlled by a different plist file or configuration profile. (Shameless self-promotion) Learn more about Property lists, Preferences and Profiles You could build a tool with Swift or Objective-C that uses the proper frameworks to get the “real” preference value. Or you can use Python with the Objective-C bridge:

#!/usr/bin/python
from Foundation import CFPreferencesCopyAppValue
print CFPreferencesCopyAppValue("idleTime", "com.apple.screensaver")

Three simple lines of Python code. This will work with the pre-installed Python 2.7, because Apple also pre-installs the Python Objective-C bridge with that. When you try this with the Developer Tools python3 you get an error:

ModuleNotFoundError: No module named 'Foundation'

This is because the Developer Tools do not include the Objective-C bridge in the installation. 
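For comparison, here is a minimal sketch of the same check under a Python 3 that ships the bridge (Munki's bundled Python, for instance); only the print call changes:

#!/usr/local/munki/python
# Python 3 version of the screensaver example above.
# Assumes this interpreter bundles the Objective-C bridge; the stock
# Developer Tools python3 would still raise ModuleNotFoundError here.
from Foundation import CFPreferencesCopyAppValue
print(CFPreferencesCopyAppValue("idleTime", "com.apple.screensaver"))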
You could easily add it with:

> sudo python3 -m pip install pyobjc

But again, while this command is “easy” enough for a single user on a single Mac, it is just the beginning of a Minoan labyrinth of management troubles. Developers and MacAdmins have to care about the version of the Python they install, as well as the list of modules and their versions, for each Python version. It is as if the Medusa head kept growing more, smaller snakes for every snake you cut off. (Ok, I will ease off with Greek mythology metaphors.) You can get a list of modules included with the AutoPkg and the Munki project with:

> /usr/local/munki/python -m pip list
> /usr/local/autopkg/python -m pip list

You will see that not only do Munki and AutoPkg include different versions of Python, but also a different list of modules. While Munki and AutoPkg share many modules, their versions might still differ. Snake Herding Solutions Apple’s advice in the Catalina Release Notes is good advice: It’s recommended that you bundle the runtime within the app. Rather than the MacAdmin managing a single version of Python and all the modules for every possible solution, each tool or application should provide its own copy of Python and its required modules. If you want to build your own Python bundle installer, you can use this script from Greg Neagle. This might seem wasteful. A full Python 3 Framework uses about 80MB of disk space, plus some extra for the modules. But it is the safest way to ensure that the tool or application gets the correct version of Python and all the modules. Anything else will quickly turn into a management nightmare. This is the approach that Munki and AutoPkg have chosen. But what about smaller, single-script solutions? For example, simple Python scripts like quickpkg or prefs-tool? Should I bundle my own Python framework with quickpkg or prefs-tool? I think that would be overkill and I am not planning to do that. I think the solution that Joseph Chilcote chose for the outset tool is a better approach for less complex Python scripts. In this case, the project is written to run with Python 3 and generic enough to not require a specific version or extra modules. An admin who wants to use this script or tool can change the shebang (the first line in the script) to point to either the Developer Tools python3, the python3 from the standard Python 3 installer, or a custom Python version, such as the Munki python. A MacAdmin would have to ensure that the python binary in the shebang is present on the Mac when the tool runs. You can also choose to provide your organization’s own copy of Python with your chosen set of modules for all your management Python scripts and automations. You could build this with the relocatable Python tool and place it in a well-known location on the clients. When updates for the Python run-time or modules are required, you can build and push them with your management system. (Thanks to Nathaniel Strauss for pointing out that this needed clarifying.) When you build such scripts and tools, it is important to document which Python versions (and module versions) you have tested the tool with. (I still have to do that for my Python tools.) What about /usr/bin/env python? The env command will determine the path to the python binary in the current environment. (i.e. using the current PATH) This is useful when the script has to run in various environments where the location of the python binary is unknown. 
This is useful when developers want to use the same script in different environments across different computers, user accounts, and platforms. However, this renders the actual version of python that will interpret the script completely unpredictable. Not only is it impossible to predict which version of Python will interpret a script, but you cannot depend on any modules being installed (or their versions) either. For MacAdmin management scripts and tools, tighter control is necessary. You should use fixed, absolute paths in the shebang. Conclusion Managing Python runtimes might seem like a hopeless Sisyphean task. I believe Apple made the right choice to not pre-install Python any more. Whatever version and pre-selection of module versions Apple would have chosen, it would only have been the correct combination for a few Python solutions and developers. While it may seem wasteful to have a multitude of copies of the Python frameworks distributed throughout the system, it is the easiest and most manageable solution to ensure that each tool or application works with the expected combination of run-time and modules.
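As a closing aside on the shebang and version questions above, a management script can also report or guard the runtime it finds itself running under. This is only a minimal sketch using the standard library; the version floor and message are placeholders:

import sys

# Refuse to run under an unexpected interpreter (parses under Python 2 and 3).
if sys.version_info < (3, 7):
    sys.exit("This tool was tested with Python 3.7+, but is running under {0} at {1}".format(
        sys.version.split()[0], sys.executable))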
8585
dbpedia
0
82
https://macintoshguy.wordpress.com/tag/autopkg/
en
The Macintosh Guy
https://s0.wp.com/i/blank.jpg
https://s0.wp.com/i/blank.jpg
[ "https://macintoshguy.wordpress.com/wp-content/uploads/2020/06/1599px-inside_the_factory.jpg?w=1024", "https://i0.wp.com/www.linkedin.com/img/webpromo/btn_viewmy_160x33.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[]
2020-07-10T09:46:18+10:00
Posts about autopkg written by Honestpuck
en
https://s1.wp.com/i/favicon.ico
The Macintosh Guy
https://macintoshguy.wordpress.com/tag/autopkg/
Here is my fourth post about PatchBot. In the first post I gave a short summary of how the system works and introduced JPCImporter, the first AutoPkg custom processor. In the second post I introduced patch management and the second custom processor. In the third post I showed the third custom processor and the code to run it at the right time. In the first three blog posts I explained (in great detail) how my system, PatchBot, works. Today I am going to cover how to take the pieces and put them together into a complete system. Continue reading → After my last post Graham Pugh mentioned that the AutoPkg repository list is stored in the AutoPkg preference file as RECIPE_REPOS with the search order in RECIPE_SEARCH_DIRS. He suggested doing a while loop on the defaults read output, but I thought it was just fiddly enough a task in the shell that I might resort to a few lines of Python, so here it is, a Python script to dump out your repository list in search order. Tiny, but it does the job. (Thanks to Graham for taking the time to comment on the previous post; it was just what I needed to get me to spend the few minutes doing this.)

#!/usr/bin/env python3
# repos.py
# print the list of AutoPkg repos in search order
# NOTE: Totally lacking in any error checking or handling
import plistlib
from os import path

plist = path.expanduser('~/Library/Preferences/com.github.autopkg.plist')
fp = open(plist, 'rb')
prefs = plistlib.load(fp)
search = prefs['RECIPE_SEARCH_DIRS']
repos = prefs['RECIPE_REPOS']
# start at 3 to skip the built in ones
for i in range(3, len(search)):
    print(repos[search[i]]['URL'])

Here’s a little one for you. I needed to keep the recipe repositories in sync across two machines. AutoPkg will happily give you a repo-list; here’s part of mine:

autopkg repo-list
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.48kRAM-recipes (https://github.com/autopkg/48kRAM-recipes)
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.HobbitHardcase-recipes (https://github.com/autopkg/HobbitHardcase-recipes)
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.MichalMMac-recipes (https://github.com/autopkg/MichalMMac-recipes)
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.adobe-ccp-recipes (https://github.com/autopkg/adobe-ccp-recipes)
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.arubdesu-recipes (https://github.com/autopkg/arubdesu-recipes)
/Users/Anthony.WILLIAMS/Library/AutoPkg/RecipeRepos/com.github.autopkg.aysiu-recipes (https://github.com/autopkg/aysiu-recipes)

Unfortunately that isn’t in a form you can feed to AutoPkg’s repo-add command. We need something like sed to make it right. Here we go.

autopkg repo-list | sed "s#[^(]*(\([^)]*\)).*#\1#"
https://github.com/autopkg/48kRAM-recipes
https://github.com/autopkg/HobbitHardcase-recipes
https://github.com/autopkg/MichalMMac-recipes
https://github.com/autopkg/adobe-ccp-recipes
https://github.com/autopkg/arubdesu-recipes
https://github.com/autopkg/aysiu-recipes

Now we add them on the other computer. Pipe the above into repos.txt and then:

while read -r line ; do
    autopkg repo-add $line
done < repos.txt

Now if AutoPkg had an option to list the repositories in search order rather than alphabetical… (A Python take on the re-add step is sketched at the end of this page.) Over the weekend I was feeling a little bored so I decided to try my hand at writing a shell script to add custom completion for autopkg to bash. (tl;dr – the script is on GitHub.) 
I found an example for the zsh shell which lacked a couple of features and I spent some time examining the script for brew so I wasn’t totally in the dark. There are a number of tutorials available for writing them but none are particularly detailed so that wasn’t much help. Writing Shell Scripts The first thing I should say is that I find writing shell scripts totally different to writing for any other language. I probably write shell scripts incredibly old school, shell and C were the two languages I was paid to write way back in the 1980’s. It feels like coming home. Continue reading →
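Returning to the repo-sync exercise above: the sed and while-read steps can also be collapsed into a few lines of Python in the same spirit as repos.py. A sketch with the same caveat of no error handling; the script name is made up:

#!/usr/bin/env python3
# add_repos.py - read the repos.txt produced above and run
# `autopkg repo-add` once per URL on the second machine.
import subprocess
import sys

with open(sys.argv[1]) as repo_file:
    for line in repo_file:
        url = line.strip()
        if url:
            subprocess.run(['autopkg', 'repo-add', url], check=True)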
8585
dbpedia
1
21
https://debricked.com/select/compare/pypi-python-for-android-vs-github-autopkg/autopkg-vs-github-Lukc/pkgxx
en
Debricked
https://debricked.com/fa…on-32x32.png?v=2
https://debricked.com/fa…on-32x32.png?v=2
[]
[]
[]
[ "" ]
null
[]
null
en
/apple-touch-icon.png?v=2
https://debricked.com/select/compare/pypi-python-for-android-vs-github-autopkg/autopkg-vs-github-Lukc/pkgxx
8585
dbpedia
2
35
https://community.jamf.com/t5/jamf-pro/what-do-you-use-python-scripting-in-jamf-pro-to-accomplish/m-p/148248
en
Jamf Nation Community
[ "https://community.jamf.com/legacyfs/online/avatars/b3475343316a4b8984d09cd7a17c76d1.png", "https://community.jamf.com/legacyfs/online/avatars/48d133ebb3b64dbca981c8701f6ec6d2.png", "https://community.jamf.com/legacyfs/online/avatars/6094cf30144d4be3a7c5fba0f718613d.png", "https://community.jamf.com/legacyfs/online/avatars/4d74ab3ffbf445d2b1a8859412192e72.png", "https://community.jamf.com/legacyfs/online/avatars/5b5b4e9e8bd049eeafb377121275fcf2.png", "https://community.jamf.com/legacyfs/online/avatars/48d133ebb3b64dbca981c8701f6ec6d2.png", "https://community.jamf.com/legacyfs/online/avatars/b3475343316a4b8984d09cd7a17c76d1.png", "https://community.jamf.com/legacyfs/online/avatars/8137a2d3699247db9d58332fd89b2f1a.png", "https://community.jamf.com/skins/images/3C7018BFED3E064C6B0C86CAD438737B/responsive_peak/images/icon_anonymous_message.png", "https://community.jamf.com/skins/images/3C7018BFED3E064C6B0C86CAD438737B/responsive_peak/images/icon_anonymous_message.png", "https://community.jamf.com/html/@DB007B9D4B38359F399423E43927D581/assets/logo-jamf-blk.svg" ]
[]
[]
[ "" ]
null
[ "community.jamf.com", "user-id" ]
2019-05-11T19:11:46+00:00
Solved: Hey, I am looking ideas to use Python to accomplish various tasks with Jamf. I am not sure where to get started or how powerful - 148248
en
https://community.jamf.com/html/@341C36E148083396DBCB6E6A9C18E572/assets/favicon.ico
Jamf Nation
https://community.jamf.com/t5/jamf-pro/what-do-you-use-python-scripting-in-jamf-pro-to-accomplish/m-p/148248#M137298
I started digging into Python about a year ago and thus far anything I can do in BASH I can do in Python. In my limited experience, modules work just fine, depending on the module of course. I have a large clustered Linux environment so I wrote a tool in Python that allows me to run commands against all of the nodes in a loop. This helps with stopping, rebooting, or restarting Tomcat for upgrades etc. I would recommend picking a problem or project that you would normally solve with BASH and figuring out how to do it in Python. This list is slightly out of date, but it is still a very good list of Mac/Apple admin tools built with Python. https://github.com/timsutton/python-macadmin-tools Some of the best known are Munki, AutoPkg, dockutil, and outset. There are a ton of others. Python also gives you access to PyObjC, which extends your scripting possibilities even further if you're interested in experimenting with Apple frameworks. Something like this - https://github.com/gregneagle/macaduk2017 Bash is still great and useful in many situations. You'll notice though that most open source Apple management projects aren't written in it. Instead they prefer Python, Ruby, Swift, etc. Guess I can't provide a definitive answer as to why, but in my opinion it's because those languages handle data processing much better than bash and have built-in functions/modules for common tasks. Things like text manipulation, reading/writing JSON, looping over lists, etc. are just way easier. I think ultimately it's good to use the tool that works for your situation. If bash is that tool then go for it. Hi @kdean We are working on two tools, "python-jamf" and "jctl", and we will be presenting on them at JNUC 2021. The presentation will be named "Turn 1000 Clicks into 1 with python-jamf and jctl" `python-jamf` is a library for connecting to a Jamf Server that maps directly to the Jamf Pro Classic API. It is the basis for the `jctl` tool to automate patch management & packages and many other items. We are actively developing the tools, which provide automation and let you perform tasks quickly on your Jamf Pro server. Here are the GitHub repositories, if you are interested in checking them out... python-jamf https://github.com/univ-of-utah-marriott-library-apple/python-jamf jctl https://github.com/univ-of-utah-marriott-library-apple/jctl We have a channel in the MacAdmins Slack if you have any questions or need help with setup and usage: jctl
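To make the data-processing point above concrete, here is a rough sketch of pulling a computer list as JSON from the Jamf Pro Classic API using only the Python standard library. The host, credentials, and exact response shape are assumptions to adapt for your environment, and a token is preferable to basic auth in practice:

#!/usr/bin/env python3
# Hypothetical illustration of "Python handles structured data better than bash".
import base64
import json
import urllib.request

JSS = "https://jamf.example.com:8443"                     # assumption
AUTH = base64.b64encode(b"apiuser:apipassword").decode()  # assumption

req = urllib.request.Request(
    JSS + "/JSSResource/computers",
    headers={"Accept": "application/json",
             "Authorization": "Basic " + AUTH})
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

for computer in data.get("computers", []):
    print(computer["id"], computer["name"])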
8585
dbpedia
3
96
https://www.adminschoice.com/8-best-text-editors-for-linux-desktop
en
8 Best Text Editors for Linux desktop
https://i0.wp.com/www.ad…it=32%2C32&ssl=1
https://i0.wp.com/www.ad…it=32%2C32&ssl=1
[]
[]
[]
[ "linux" ]
null
[ "hemantsh11" ]
2019-06-03T03:40:46-07:00
en
https://i0.wp.com/www.ad…it=32%2C32&ssl=1
Admin's Choice
https://www.adminschoice.com/8-best-text-editors-for-linux-desktop
Coding skills are essential for every developer, whether building for the web or developing software. Just as writers lean on good tools to produce engaging articles, coding goes beyond merely writing code; it requires effort, commitment, and plenty of hard work. Developers once had to go through the arduous process of looking up every detail while coding, but not anymore, thanks to text editors for Linux. Developers can code more freely without stressing about perfection. Text editors for the Linux desktop offer functionality that makes the work of coding easier and better. Among the variety of options on the market, we are going to consider the best ones that will help developers build fascinating apps. Brackets Brackets was released by Adobe in 2014 with some unique features that help to simplify coding; its most recent version is Brackets 1.12. This Linux text editor was made explicitly for web designers and front-end developers, with live preview, inline editing, and focused visuals. It was developed from the ground up around HTML, CSS, and JavaScript. It also has additional features such as file navigation. Brackets is a powerful open source text editor with several configurations that make real coding fun. Although it’s lightweight, its outstanding functionality far outweighs other similar software available to developers. More info at brackets.io Sublime Text This is considered a favorite Linux editor by many developers. It’s a rich text editor that offers many features while supporting several markup and programming languages. A good way to extend its functionality is by applying plugins that are under free software licenses. With the “goto anything” feature, developers can search the Sublime database for lines, symbols, and files. Other interesting features of this Linux text editor are project-based preferences, a Python-based plugin API, and simultaneous editing. The many plugins available make it function as a complete development environment. More info at Sublime Text Atom This is one of the Electron-based Linux text editors on the market. It was launched by GitHub and is popularly referred to as the 21st-century text editor. It’s fully hackable, and its core features include an automatic package manager, cross-platform editing, multiple pane support, a file system browser, and find-and-replace functionality. The tech world has worked hard to keep up with growing demands in web design and software development, and Atom is one of the editors driving that advancement. More info at atom.io VIM Are you tired of the default editor in Linux and want a more advanced option for your editing routine? VIM has a lot of options for you; its developers added them to make it a powerful text editor. It is highly configurable, and it can run as a standalone GUI application or as a command line utility. More info at vim.org Gedit This is a built-in text editor designed as the default text editor for the GNOME desktop environment. 
It has a fabulous user interface with straightforward functionality, and it’s fully applicable to internationalized text. It’s lightweight and highly functional modules are the reasons why it’s still among the best Linux editors to date. More info at gnome Gedit EMACS This could be your favorite editor as it’s one of the oldest on the market with unique functionality. Little wonder why it is popular among Linux fans and developers globally. It was developed and founded by Richard Stallman. You can extend its functionality using Turing programming language. It’s a free editor that comes with the needed documentation and support system. It also has an array of extensions such as the debugger, news, and mail.. More info at GNU EMACS Visual Studio Code It comes with an array of options that programmers can leverage on to get their job done right on time. It’s an open source text editing software, and its VS code performance outweigh that of other software even if it is lightweight. All the features of this amazing software are fully compatible with debuggers and commands. It has the latest 2019 version, which is the VS Code 1.32. A powerful tool for develops to code without hassles. More info at code.visualstudio.com Nano This is just like the Pico text editor, and it was released in June 2000 for the Unix based operating systems. It was designed to meet the needs of Linux enthusiasts and anyone with the habit of experimenting with files. It has a case sensitive function, auto support configuration, auto indentation, tab completion, and searches function. These are some of the outstanding qualities that make it extremely different from other Linux text editors available. More info at GNU nano
8585
dbpedia
0
94
https://technology.siprep.org/script-autopkg-trust-verification-and-trust-update-process/
en
Script AutoPkg trust verification and trust update process
https://i0.wp.com/techno…=512%2C623&ssl=1
https://i0.wp.com/techno…=512%2C623&ssl=1
[ "https://secure.gravatar.com/avatar/80110b87b2dffff52bd80c41c1c2ab30?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/497f4b7703eaeb75eb6104a852c10067?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/80110b87b2dffff52bd80c41c1c2ab30?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/497f4b7703eaeb75eb6104a852c10067?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/497f4b7703eaeb75eb6104a852c10067?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/80110b87b2dffff52bd80c41c1c2ab30?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/497f4b7703eaeb75eb6104a852c10067?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/d5e86705d17d050640c7b65afa482d02?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/497f4b7703eaeb75eb6104a852c10067?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/d5e86705d17d050640c7b65afa482d02?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/497f4b7703eaeb75eb6104a852c10067?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/d5e86705d17d050640c7b65afa482d02?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/497f4b7703eaeb75eb6104a852c10067?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/d5e86705d17d050640c7b65afa482d02?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/497f4b7703eaeb75eb6104a852c10067?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/11c9f5ba04aed24513f1df03574301ef?s=32&d=mm&r=g", "https://secure.gravatar.com/avatar/497f4b7703eaeb75eb6104a852c10067?s=32&d=mm&r=g" ]
[]
[]
[ "" ]
null
[]
2017-04-26T19:34:55+00:00
Starting with version 1, AutoPkg began evaluating trust info for recipes, so you could see what changes were made to a recipe (if changes were made) and then accept the changes if you wanted to. He…
en
https://i0.wp.com/techno…it=26%2C32&ssl=1
St. Ignatius College Prep Tech Blog
https://technology.siprep.org/script-autopkg-trust-verification-and-trust-update-process/
8585
dbpedia
2
19
https://osxdominion.wordpress.com/tag/python/
en
OS X Dominion: Mastering OS X Management
https://s0.wp.com/i/blank.jpg
https://s0.wp.com/i/blank.jpg
[ "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[ "Nick McSpadden" ]
2015-11-02T10:58:51-08:00
Posts about python written by Nick McSpadden
en
https://s1.wp.com/i/favicon.ico
OS X Dominion: Mastering OS X Management
https://osxdominion.wordpress.com/tag/python/
AutoPkg Wrapper Scripts There are myriad AutoPkg wrapper scripts/tools available out there: [Sean Kaiser’s AutoPkg Change Notifications script](http://seankaiser.com/blog/2013/12/16/autopkg-change-notifications/) [The Linde Group’s GUI-based AutoPkgr](http://www.lindegroup.com/autopkgr) [Allister Banks’ Jenkins-based script](https://www.afp548.com/2015/05/22/one-way-to-be-autopkging-the-jenkins/) [Ben Goodstein’s bash script](https://gist.github.com/fuzzylogiq/a2b922b41aa7d320dfc1) And several more… They all serve the same basic goal – run AutoPkg with a selection of recipes, and trigger some sort of notification / email / alert when an import succeeds, and when a recipe fails. This way, admins can know when something important has happened and make any appropriate changes to their deployment mechanism to incorporate new software. Everything Goes In Git Facebook is, unsurprisingly, big on software development. As such, Facebook has a strong need for source control in all things, so that code and changes can always be identified, reviewed, tested, and if necessary, reverted. Source control is an extremely powerful tool for managing differential changes among flat text files – which is essentially what AutoPkg is. Munki also benefits strongly, as all of Munki configuration is based solely on flat XML-based files. Pkginfo files, catalogs, and manifests all benefit from source control, as any changes made to the Munki repo will involve differential changes in (typically) small batches of lines relative to the overall sizes of the catalogs. Obvious note: binaries packages / files have a more awkward relationship with git and source control in general. Although it’s out of the scope of this blog post, I recommend reading up on Allister Banks’ article on git-fat on AFP548 and how to incorporate large binary files into a git repo. Git + Munki At Facebook, the entire Munki repo exists in git. When modifications are made or new packages are imported, someone on the Client Platform Engineering team makes the changes, and then puts up a differential commit for team review. Another member of the team must then review the changes, and approve. This way, nothing gets into the Munki repo that at least two people haven’t looked at. Since it’s all based on git, merging changes from separate engineers is relatively straightforward, and issuing reverts on individual packages can be done in a flash. AutoPkg + Munki AutoPkg itself already has a great relationship with git – generally all recipes are repos on GitHub, most within the AutoPkg GitHub organization, and adding a new repo generally amounts to a git clone. My initial attempts to incorporate AutoPkg repos into a separate git repo were a bit awkward. “Git repo within a git repo” is a rather nasty rabbit hole to go down, and once you get into git submodules you can see the fabric of reality tearing and the nightmares at the edge of existence beginning to leak in. Although submodules are a really neat tactic, regulating the updating of a git repo within a git repo and successfully keeping this going on several end point machines quickly became too much work for too little benefit. We really want to make sure that AutoPkg recipes we’re running are being properly source controlled. We need to be 100% certain that when we run a recipe, we know exactly what URL it’s pulling packages from and what’s happening to that package before it gets into our repo. 
We need to be able to track changes in recipes so that we can be alerted if a URL changes, or if more files are suddenly copied in, or any other unexpected developments occur. This step is easily done by rsyncing the various recipe repos into git, but this has the obvious downside of adding a ton of stuff to the repo that we may not ever use. The Goal The size and shape of the problem is clear: We want to put only recipes that we care about into our repo. We want to automate the updating of the recipes we care about. We want code review for changes to the Munki repo, so each package should be a separate git commit. We want to be alerted when an AutoPkg recipe successfully imports something into Munki. We want to be alerted if a recipe fails for any reason (typically due to a bad URL). We really don’t want to do any of this by hand. autopkg_runner.py Facebook’s Client Platform Engineering team has authored a Python script that performs these tasks: autopkg_runner.py. The Setup In order to make use of this script, AutoPkg needs to be configured slightly differently than usual. The RECIPE_REPO_DIR key should be the path to where all the AutoPkg git repos are stored (when added via autopkg add). The RECIPE_SEARCH_DIRS preference key should be reconfigured. Normally, it’s an array of all the git repos that are added with autopkg add (in addition to other built-in search paths). In this context, the RECIPE_SEARCH_DIRS key is going to be used to contain only two items – ‘.’ (the local directory), and a path to a directory inside your git repo that all recipes will be copied to (with rsync, specifically). As described earlier, this allows any changes in recipes to be incorporated into git differentials and put up for code review. Although not necessary for operation, I also recommend that RECIPE_OVERRIDE_DIRS be inside a git repo as well, so that overrides can also be tracked with source control. The entire Munki repo should also be within a git repo, obviously, in order to make use of source control for managing Munki imports. Notifications In the public form of this script, the create_task() function is empty. This can be populated with any kind of notification system you want – such as sending an email, generating an OS X notification to Notification Center (such as Terminal Notifier or Yo), filing a ticket with your ticketing / helpdesk system, etc. If run as is, no notifications of any kind will be generated. You’ll have to write some code to perform this task (or track me down in Slack or at a conference and badger me into doing it). What It Does The script has a list of recipes to execute inside (at line 33). These recipes are parsed for a list of parents, and all parent recipes necessary for executing these are then copied into the RECIPE_REPO_DIR from the AutoPkg preferences plist. This section is where you’ll want to put in the recipes that you want to run. Each recipe in the list is then run in sequence, and catalogs are made each time. This allows each recipe to create a full working git commit that can be added to the Munki git repo without requiring any other intervention (obviously into a testing catalog only, unless you shout “YOLO” first). Each recipe saves a report plist. This plist is parsed after each autopkg run to determine if any Munki imports were made, or if any recipes failed. The function create_task() is called to send the actual notification. 
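As an aside, the empty create_task() hook mentioned above can be filled with whatever notification channel you use. Here is a minimal sketch that just sends a plain-text email through a local relay; the function signature, addresses, and relay host are assumptions, so adjust them to match the script:

import smtplib
from email.message import EmailMessage

def create_task(title, description):
    # Hypothetical notification hook: one email per Munki import or recipe failure.
    msg = EmailMessage()
    msg["Subject"] = "[autopkg] " + title
    msg["From"] = "autopkg@example.com"        # assumption
    msg["To"] = "cpe-team@example.com"         # assumption
    msg.set_content(description)
    with smtplib.SMTP("localhost") as smtp:    # assumes a local mail relay
        smtp.send_message(msg)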
If any Munki imports were made, the script will automatically change directory to the Munki repo, and create a git feature branch for that update – named after the item and the version that was imported. The changes that were made (the package, the pkginfo, and the changes to the catalogs) are put into a git commit. Finally, the current branch is switched back to the Master branch, so that each commit is standalone and not dependent on other commits to land in sequence. NOTE: the commits are NOT automatically pushed to git. Manual intervention is still necessary to push the commit to a git repo, as Facebook has a different internal workflow for doing this. An enterprising Python coder could easily add that functionality in, if so desired. Execution & Automation At this point, executing the script is simple. However, in most contexts, some automation may be desired. A straightforward launch daemon to run this script nightly could be used. Some Caveats on Automation Automation is great, and I’m a big fan of it. However, with any automated system, it’s important to fully understand the implications of each workflow. With this particular workflow, there’s a specific issue that might arise based on timing. Since each item imported into Munki via AutoPkg is a separate feature branch, that means that the catalog technically hasn’t changed when you run the .munki recipe against the Master branch. If you run this recipe twice in a row, AutoPkg will try to re-import the packages again, because the Master branch hasn’t incorporated your changes yet. In other words, you probably won’t want to run this script until your git commits are pushed into Master. This could be a potential timing issue if you are running this script on a constant time schedule and don’t get an opportunity to push the changes into master before the next iteration. I Feel Powerful Today, Give Me More If you are seeking even more automation (and feel up to doing some Python), you could add in a git push to make these changes happen right away. If you are only adding items to a testing catalog with limited and known distribution, this may be a reasonably safe way to keep track of all Munki changes in source control without requiring human intervention. Such a change would be easy to implement, since there’s already a helper function to run git commands – git_run(). Incorporating a git push involves making some minor changes to the end of create_commit(); a standalone sketch of the idea appears at the end of this post. Conclusions Ultimately, the goal here is to remove manual work from a repetitive process, without giving up any control or the ability to isolate changes. Incorporating Munki and AutoPkg into source control is a very strong way of adding safety, sanity, and accountability to the Mac infrastructure. Although this blog post bases it entirely around git, you could accommodate a similar workflow with Mercurial, SVN, etc. The full take-away from this is to be mindful of the state of your data at all times. With source control, it’s easier to manage multiple people working on your repo, and it’s (relatively) easy to fix a mistake before it becomes a catastrophe. Source control has the added benefit of acting as an ersatz backup of sorts, where it becomes much easier to reconstitute your repo in case of disaster because you now have a record of what the state of the repo was at any given point in its history.
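Returning to the git push idea above: rather than guessing at git_run()'s exact signature, here is a standalone sketch of the same step using subprocess; the function and argument names are made up:

import subprocess

def push_feature_branch(munki_repo, branch_name):
    # Publish a freshly created feature branch so review can start right away.
    subprocess.run(
        ['git', '-C', munki_repo, 'push', '--set-upstream', 'origin', branch_name],
        check=True)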
8585
dbpedia
3
55
https://robbmann.io/posts/emacs-treesit-auto/
en
Getting Emacs 29 to Automatically Use Tree-sitter Modes
https://robbmann.io/favicon-32x32.png
https://robbmann.io/favicon-32x32.png
[ "https://robbmann.io/img/robb_python_grey_huf4e52b91f6345de53e62f8c2a64f08ae_471021_192x192_fill_box_smart1_3.png" ]
[]
[]
[ "" ]
null
[ "Robert Enzmann" ]
2023-01-22T00:00:00-05:00
It's Robb, man!
en
/apple-touch-icon.png
robbmann
https://robbmann.io/posts/emacs-treesit-auto/
Recently, /u/casouri posted a guide to getting started with the new built-in tree-sitter capabilities for Emacs 29. In that post, they mention that there will be no automatic major-mode fallback for Emacs 29. That means I would have to use M-x python-ts-mode manually, or change the entry in auto-mode-alist to use python-ts-mode, in order to take advantage of the new tree-sitter functionality. Of course, that would still leave the problem of when the Python tree-sitter grammar isn’t installed, in which case python-ts-mode is going to fail. To solve this issue, I wrote a very small package that adjusts the new major-mode-remap-alist variable based on what grammars are ready on your machine. If a language’s tree-sitter grammar is installed, it will use that mode. If not, it will use the original major mode. Simple as that! For the impatient: treesit-auto.el # The package I wound up with is available on GitHub and MELPA as treesit-auto.el. So long as MELPA is on your package-archives list like this: Then you can use M-x package-refresh-contents followed by M-x package-install RET treesit-auto. If you also like having a local copy of the git repository itself, then package-vc-install is a better fit: Then, in your configuration file: See the README on GitHub for all the goodies you can put in the :config block. Origins of treesit-auto.el # The recommendation in Yuan’s article was to use define-derived-mode along with treesit-ready-p. In the NEWS (C-h n), however, I noticed a new variable major-mode-remap-alist, which at a glance appears suitable for a similar cause. For my Emacs configuration, I had two things I wanted to accomplish: Set all of the URLs for treesit-language-source-alist up front, so that I need only use treesit-install-language-grammar RET python RET, instead of writing out everything interactively Use the same list of available grammars to remap between tree-sitter modes and their default fallbacks Initially, I tried Yuan’s suggested approach with define-derived-mode, but I didn’t want to repeat code for every major mode I wanted fallback for. Trying to expand the major mode names correctly in a loop wound up unwieldy, because expanding the names properly for the define-derived-mode macro was too challenging for my current skill level with Emacs lisp, and wound up cluttering the global namespace more than I liked when auto-completing through M-x. Instead, I decided take a two step approach: Set up treesit-language-source-alist with the grammars I’ll probably use Loop over the keys in this alist to define the association between a tree-sitter mode and its default fallback through major-mode-remap-alist This makes the code we need to actually write a little simpler, since an association like python-mode to python-ts-mode can be automatic (since they share a name), and we can use a customizable alist for specifying the edge cases, such as toml-ts-mode falling back to conf-toml-mode. To start with, I just had this: At this point, I can just use M-x treesit-install-language-grammar RET bash to get the Bash grammar, and similarly for other languages. Then, I made an alist of the “weird” cases: Setting the CDR to nil explicitly means I didn’t want any type of fallback to be attempted whatsoever for a given tree-sitter mode, even if something similarly named might be installed. Finally, I had a simple loop where I constructed the symbols for the mode and the tree-sitter mode via intern and concat, and check whether the tree-sitter version is available through treesit-ready-p. 
If it is, we remap the base mode to the tree-sitter one in major-mode-remap-alist. If it isn’t ready, then we do the opposite: remap the tree-sitter mode to the base version.
8585
dbpedia
0
41
https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html
en
sphinx.ext.autodoc – Include documentation from docstrings — Sphinx documentation
https://www.sphinx-doc.o…atic/favicon.svg
https://www.sphinx-doc.o…atic/favicon.svg
[ "https://www.sphinx-doc.org/en/master/_static/sphinx-logo.svg" ]
[]
[]
[ "" ]
null
[]
null
en
../../_static/favicon.svg
https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html
These work exactly like autoclass etc., but do not offer the options used for automatic member documentation. autodata and autoattribute support the annotation option. The option controls how the value of the variable is shown. If specified without arguments, only the name of the variable will be printed, and its value is not shown:

.. autodata:: CD_DRIVE
   :annotation:

If the option is specified with arguments, it is printed after the name as a value of the variable:

.. autodata:: CD_DRIVE
   :annotation: = your CD device name

By default, without the annotation option, Sphinx tries to obtain the value of the variable and print it after the name. The no-value option can be used instead of a blank annotation to show the type hint but not the value:

.. autodata:: CD_DRIVE
   :no-value:

If both the annotation and no-value options are used, no-value has no effect. For module data members and class attributes, documentation can either be put into a comment with special formatting (using a #: to start the comment instead of just #), or in a docstring after the definition. Comments need to be either on a line of their own before the definition, or immediately after the assignment on the same line. The latter form is restricted to one line only. This means that in the following class definition, all attributes can be autodocumented:

class Foo:
    """Docstring for class Foo."""

    #: Doc comment for class attribute Foo.bar.
    #: It can have multiple lines.
    bar = 1

    flox = 1.5   #: Doc comment for Foo.flox. One line only.

    baz = 2
    """Docstring for class attribute Foo.baz."""

    def __init__(self):
        #: Doc comment for instance attribute qux.
        self.qux = 3

        self.spam = 4
        """Docstring for instance attribute spam."""

Changed in version 0.6: autodata and autoattribute can now extract docstrings. Changed in version 1.1: Comment docs are now allowed on the same line after an assignment. Changed in version 1.2: autodata and autoattribute have an annotation option. Changed in version 2.0: autodecorator added. Changed in version 2.1: autoproperty added. Changed in version 3.4: autodata and autoattribute now have a no-value option. Note If you document decorated functions or methods, keep in mind that autodoc retrieves its docstrings by importing the module and inspecting the __doc__ attribute of the given function or method. That means that if a decorator replaces the decorated function with another, it must copy the original __doc__ to the new function. This value (autodoc_typehints_description_target) controls whether the types of undocumented parameters and return values are documented when autodoc_typehints is set to description. The default value is "all", meaning that types are documented for all parameters and return values, whether they are documented or not. When set to "documented", types will only be documented for a parameter or a return value that is already documented by the docstring. With "documented_params", parameter types will only be annotated if the parameter is documented in the docstring. The return type is always annotated (except if it is None). Added in version 4.0. Added in version 5.0: New option 'documented_params' is added. A dictionary (autodoc_type_aliases) for user-defined type aliases that maps a type name to the fully-qualified object name. It is used to keep type aliases not evaluated in the document. Defaults to empty ({}). The type aliases are only available if your program enables the Postponed Evaluation of Annotations (PEP 563) feature via from __future__ import annotations. 
For example, there is code using a type alias:

from __future__ import annotations

AliasType = Union[List[Dict[Tuple[int, str], Set[int]]], Tuple[str, List[str]]]

def f() -> AliasType:
    ...

If autodoc_type_aliases is not set, autodoc will generate internal mark-up from this code as follows:

.. py:function:: f() -> Union[List[Dict[Tuple[int, str], Set[int]]], Tuple[str, List[str]]]

   ...

If you set autodoc_type_aliases as {'AliasType': 'your.module.AliasType'}, it generates the following document internally:

.. py:function:: f() -> your.module.AliasType:

   ...

Added in version 3.3.
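Tying the options discussed above together, a conf.py fragment might look like the sketch below. The alias mapping and target value are examples only, and the option names here reflect current Sphinx releases:

# conf.py (excerpt)
extensions = ['sphinx.ext.autodoc']

# Render type hints in the description rather than the signature,
# and only for parameters the docstring actually documents.
autodoc_typehints = 'description'
autodoc_typehints_description_target = 'documented'

# Keep the alias name in the generated docs instead of its expansion.
autodoc_type_aliases = {'AliasType': 'your.module.AliasType'}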
8585
dbpedia
3
14
https://autopackage.soft112.com/
en
autopackage Free Download
https://www.soft112.com/…avicon-32x32.png
https://www.soft112.com/…avicon-32x32.png
[]
[]
[]
[ "autopackage publisher description", "publisher description", "autopackage download", "autopackage free download", "autopackage" ]
null
[]
2024-04-22T00:00:00
autopackage - Autopackage provides a framework for creating easy-to-use and multi-distribution Linux installers/packages.
https://www.soft112.com/…e-touch-icon.png
https://autopackage.soft112.com
autopackage is free software published in the Other list of programs, part of Network & Internet. This program is available in English. It was last updated on 22 April, 2024. autopackage is compatible with the following operating systems: Linux. The company that develops autopackage is autopackage.org/. The latest version released by its developer is 1.0. This version was rated by 3 users of our site and has an average rating of 3.5. The download we have available for autopackage has a file size of . Just click the green Download button above to start the downloading process. The program has been listed on our website since 2011-08-11 and was downloaded 3,251 times. We have already checked whether the download link is safe; however, for your own protection we recommend that you scan the downloaded software with your antivirus. Your antivirus may detect autopackage as malware if the download link is broken. How to install autopackage on your device: Click on the Download button on our website. This will start the download from the website of the developer. Once autopackage is downloaded, click on it to start the setup process (assuming you are on a desktop computer). When the installation is finished, you should be able to see and run the program.
8585
dbpedia
0
16
https://macblog.org/autopkg-icons/
en
Automatically Export and Generate App Icons in AutoPkg Recipes
https://macblog.org/proc…e1e150e69eb.webp
https://macblog.org/proc…e1e150e69eb.webp
[ "https://macblog.org/processed_images/all-icons.9697d1810d061335.webp", "https://macblog.org/processed_images/example-composited-icons.1354ec2d4ddba6a3.webp", "https://macblog.org/processed_images/default-template-icons.6ca9070ab9e182cb.webp", "https://macblog.org/processed_images/example-custom-template.4d23aedbf67233f8.webp", "https://macblog.org/processed_images/padding-and-position.923b044e40c8580d.webp", "https://macblog.org/processed_images/chrome-custom-example.4b9ce254eaff892c.webp" ]
[]
[]
[ "" ]
null
[ "macblog.org" ]
2022-01-31T00:00:00
AppIconExtractor examines an app and exports its icon as a PNG image file (reading the CFBundleIconFile property from an app's Info.plist and saving that image as a PNG file. Additionally, AppIconExtractor can create icon variations by compositing a secondary image on top of the app's icon.
en
/apple-touch-icon.png
MacBlog
https://macblog.org/autopkg-icons
I'm a stickler for including icons for all policies available in Jamf Pro's Self Service app. They help users find items in Self Service, and generally make the app easier to use. However, I don't like manually extracting icons from apps. It's easy enough with a tool like SAP's Icons app, but if I'm automating package and policy creation with AutoPkg, I should similarly be able to automate icon creation, right? I created the AppIconExtractor AutoPkg processor to fully automate this task. At its core, AppIconExtractor examines an app and exports its icon as a PNG image file. More technically, it reads the CFBundleIconFile property from an app's Info.plist and saves that image as a PNG file at the path of your choice. Additionally, AppIconExtractor can create icon variations by compositing a secondary image on top of the app's icon. This makes it simple to automatically create a version of an icon with a destructive "red X" icon superimposed over the app icon for use in uninstallation policies, or a version with an "update" graphic for use in policies that update an app. Add my recipes and install the Pillow library First, you'll need my recipe repository available to your local AutoPkg installation. Add it with autopkg repo-add haircut-recipes. AppIconExtractor requires installation of the Pillow Python library. Pillow is used to convert and composite icons, and can be easily installed on the Mac you use to run AutoPkg. Use this command:

/usr/local/autopkg/python -m pip install --upgrade Pillow

Note that this installs the Pillow library within the path of AutoPkg's Python framework. This is very important. If you just run pip or pip3 without the explicit path to AutoPkg's Python installation, AutoPkg won't be able to find the library. Recipes will produce an error directing you to install Pillow using the specific command above. With Pillow installed, you're ready to go. Basic use Using AppIconExtractor is as simple as including the processor as a step in a recipe's Process dictionary. Use the shared processor syntax to call com.github.haircut.processors/AppIconExtractor. It requires only one argument: source_app, which is the path to the .app from which to extract an icon. If the path to the app points inside a disk image, that .dmg will be mounted automatically. By default, the app's icon will be output to the recipe's cache directory as %NAME%.png. You can optionally override this output path (and filename) by setting the icon_output_path argument. A simple example in XML format might look like:

<key>Process</key>
<array>
    <dict>
        <key>Processor</key>
        <string>com.github.haircut.processors/AppIconExtractor</string>
        <key>Arguments</key>
        <dict>
            <key>source_app</key>
            <string>%RECIPE_CACHE_DIR%/CoolApp/CoolApp.app</string>
        </dict>
    </dict>
</array>

This will extract the icon from "CoolApp.app" and save it as a 256px square PNG image to the recipe cache directory. As mentioned, adding the icon_output_path argument will give you additional control over the output path and filename. Here's an example in YAML format:

Process:
  - Processor: com.github.haircut.processors/AppIconExtractor
    Arguments:
      source_app: "%RECIPE_CACHE_DIR%/CoolApp/CoolApp.app"
      icon_output_path: "%RECIPE_CACHE_DIR%/Icons/Icon-%NAME%.png"

Generating composited variations Beyond extracting the app's icon, AppIconExtractor can also create variation images by compositing a "template image" on top of the app icon. The processor can output variations for an "uninstall," "update," and "install" version of the app icon. 
To generate a variation, add a processor argument to set an output path for that variation. Use one or more of the following arguments:

composite_install_path
composite_update_path
composite_uninstall_path

Omit any variations you don't want to generate. The processor will only create the variations for which you specify an output path. If you specify only output paths for variations, AppIconExtractor will use sensible defaults to composite suitable icons. The default templates are glyphs from SF Symbols that will work well in most situations. Each template is 64px in size, and looks nice in the corner. These templates are encoded within the processor; you don't need to do anything to use these defaults! Here's an example in YAML format:

Process:
  - Processor: com.github.haircut.processors/AppIconExtractor
    Arguments:
      source_app: "%RECIPE_CACHE_DIR%/CoolApp/CoolApp.app"
      icon_output_path: "%RECIPE_CACHE_DIR%/Icons/Icon-%NAME%.png"
      composite_update_path: "%RECIPE_CACHE_DIR%/Icons/Update-%NAME%.png"
      composite_uninstall_path: "%RECIPE_CACHE_DIR%/Icons/Uninstall-%NAME%.png"

Notice that we included arguments for the "update" and "uninstall" variations, but did not set the composite_install_path argument. This would output the "bare" app icon as well as variations for "update" and "uninstall" – but no "install" variation, since we omitted that argument. Custom templates If you don't like the default variation templates, you can use your own by setting composite_install_template, composite_update_template and/or composite_uninstall_template. Each argument should be the path to an alternative template image to use for that variation. AppIconExtractor will calculate the size of the template image at the path you specify and correctly anchor that template to the composite_position (see "Padding and position" below). Here's an example of using a custom template to generate an "uninstall" variation in XML format:

<key>Process</key>
<array>
    <dict>
        <key>Processor</key>
        <string>com.github.haircut.processors/AppIconExtractor</string>
        <key>Arguments</key>
        <dict>
            <key>source_app</key>
            <string>%RECIPE_CACHE_DIR%/CoolApp/CoolApp.app</string>
            <key>composite_uninstall_template</key>
            <string>%RECIPE_DIR%/radical-flame.png</string>
            <key>composite_uninstall_path</key>
            <string>%RECIPE_CACHE_DIR%/delete_%NAME%.png</string>
        </dict>
    </dict>
</array>

Padding and position AppIconExtractor includes a few additional options to customize your composited icon variations. composite_padding: sets the number of pixels by which the superimposed template image is offset from the edge of the icon. Defaults to 10 pixels. composite_position: sets the corner to which the superimposed template image for composited variations is anchored. Defaults to br for the bottom-right corner. You can change this to bl (bottom left), ur (upper right), or ul (upper left) if you prefer. Combinations of these options are shown below with the padding highlighted in pink: Clockwise from the upper left, this example shows: composite_padding omitted (so it defaults to 10) and composite_position of ul. composite_padding of 20 and composite_position of ur. composite_padding of 0 and composite_position omitted (so it defaults to br). composite_padding of 5 and composite_position of bl. Setting these options applies the same settings to all composited variations. This is an intentional design choice to keep the input arguments – and thus the required code – more manageable. 
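For the curious, the compositing described above can be approximated in a few lines of Pillow. This is only an illustration of the padding and anchor math, not the processor's actual code, and the filenames are made up:

from PIL import Image

# Paste a 64px template into the bottom-right ("br") corner of an icon,
# offset by 10px of padding, then save the composited variation.
icon = Image.open("Icon-CoolApp.png").convert("RGBA")             # hypothetical paths
template = Image.open("uninstall-template.png").convert("RGBA")

padding = 10
x = icon.width - template.width - padding
y = icon.height - template.height - padding
icon.alpha_composite(template, dest=(x, y))
icon.save("Uninstall-CoolApp.png")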
Output variables AppIconExtractor sets the path(s) to the extracted app icon, and any composited variations, as output variables during an AutoPkg run. This means you can extract and generate icons, then immediately use those icons in subsequent processors like JamfPolicyUploader. The following output variables are set if (and only if) the associated variations are requested: app_icon_path: path to the extracted, unmodified app icon. Always set. install_icon_path: path to the composited "install" variation. Only set if this variation is requested. update_icon_path: path to the composited "update" variation. Only set if this variation is requested. uninstall_icon_path: path to the composited "uninstall" variation. Only set if this variation is requested. Example uses in recipes Here are two examples of using AppIconExtractor in a child recipe or override. Extract the icon from an available .app bundle Greg Neagle's recipe for Sublime Text 4 leaves the unarchived .app available in the recipe cache dir at %RECIPE_CACHE_DIR%/%NAME%/Sublime Text.app. We'll use this to extract the Sublime Text icon using AppIconExtractor's default settings. Process: - Processor: com.github.haircut.processors/AppIconExtractor Arguments: source_app: "%RECIPE_CACHE_DIR%/%NAME%/Sublime Text.app" - Processor: com.github.grahampugh.jamf-upload.processors/JamfPolicyUploader Arguments: policy_template: "%POLICY_TEMPLATE%" policy_name: "%POLICY_NAME%" icon: "%app_icon_path%" replace_icon: True This extracts Sublime Text's icon without generating any composite variations, then feeds that extracted icon to the JamfPolicyUploader processor. We set replace_icon to True to ensure any change to the icon by the vendor is automatically reflected within our Jamf policy. Unpacking a .pkg to extract an icon The recipe for the Google Chrome Enterprise package downloads a .pkg directly from the vendor, so no repackaging is needed. And while the AutoPkg recipe unpacks the package to perform code signature verification, it then runs the PathDeleter processor to clean up that operation. This means a child recipe does not have access to a .app from which to extract an icon. That means we'll need to do a little more work to unpack the package again so that we can get to the app bundle. We'll also generate a custom "uninstall" variation and override the default composition position and padding. 
Here's the Process of this more complex example in XML format: <key>Process</key> <array> <dict> <key>Processor</key> <string>FlatPkgUnpacker</string> <key>Arguments</key> <dict> <key>destination_path</key> <string>%RECIPE_CACHE_DIR%/unpack</string> <key>flat_pkg_path</key> <string>%pkg_path%</string> </dict> </dict> <dict> <key>Processor</key> <string>PkgPayloadUnpacker</string> <key>Arguments</key> <dict> <key>destination_path</key> <string>%RECIPE_CACHE_DIR%/unpack/pkgpayload</string> <key>pkg_payload_path</key> <string>%RECIPE_CACHE_DIR%/unpack/GoogleChrome.pkg/Payload</string> </dict> </dict> <dict> <key>Processor</key> <string>com.github.haircut.processors/AppIconExtractor</string> <key>Arguments</key> <dict> <key>composite_padding</key> <integer>20</integer> <key>composite_position</key> <string>ul</string> <key>composite_uninstall_path</key> <string>%RECIPE_CACHE_DIR%/Icon-Uninstall-%NAME%.png</string> <key>composite_uninstall_template</key> <string>/Users/haircut/Documents/delete.png</string> <key>icon_output_path</key> <string>%RECIPE_CACHE_DIR%/Icon-%NAME%.png</string> <key>source_app</key> <string>%RECIPE_CACHE_DIR%/unpack/pkgpayload/Google Chrome.app</string> </dict> </dict> <dict> <key>Processor</key> <string>PathDeleter</string> <key>Arguments</key> <dict> <key>path_list</key> <array> <string>%RECIPE_CACHE_DIR%/unpack</string> </array> </dict> </dict> <dict> <key>Processor</key> <string>com.github.grahampugh.jamf-upload.processors/JamfPolicyUploader</string> <key>Arguments</key> <dict> <key>icon</key> <string>%app_icon_path%</string> <key>policy_name</key> <string>Install %NAME%</string> <key>policy_template</key> <string>Self-Service-Policy.xml</string> <key>replace_icon</key> <true/> </dict> </dict> <dict> <key>Processor</key> <string>com.github.grahampugh.jamf-upload.processors/JamfPolicyUploader</string> <key>Arguments</key> <dict> <key>icon</key> <string>%uninstall_icon_path%</string> <key>policy_name</key> <string>Uninstall %NAME%</string> <key>policy_template</key> <string>Uninstall-Policy.xml</string> <key>replace_icon</key> <true/> </dict> </dict> </array> This unpacks the Google Chrome enterprise package, extracts the unmodified app icon, and generates an "uninstall" composite version with a custom graphic in the upper left corner with 20px of padding. The outputs of AppIconExtractor are then used as inputs to JamfPolicyUploader process runs to set the icons for two different policies. Setting the replace_icon argument to True ensures that any changes to the icons are reflected on the Jamf Pro policies. Hopefully this processor will help you extract icons without the manual work, and spiff up those Self Service policies.
8585
dbpedia
3
43
https://parceljs.org/features/development/
en
Development
https://parceljs.org/assets/og.png
https://parceljs.org/assets/og.png
[ "https://parceljs.org/parcel.fb905a63.png", "https://parceljs.org/logo.49e8bbc1.svg", "https://parceljs.org/image.2d5f6c1f.svg", "https://parceljs.org/react.71dfc3a7.svg", "https://parceljs.org/webext.df55febd.svg", "https://parceljs.org/javascript.8e522547.svg", "https://parceljs.org/html5.8e9b85e2.svg", "https://parceljs.org/postcss.e1ddbaa1.svg", "https://parceljs.org/svg.70ad37f9.svg", "https://parceljs.org/typescript.3f240efe.svg", "https://parceljs.org/coffeescript.65ea83d0.svg", "https://parceljs.org/sass.c79925d8.svg", "https://parceljs.org/stylus.6d90e346.svg", "https://parceljs.org/less.d647b6fb.svg", "https://parceljs.org/sugarss.82d72cf4.svg", "https://parceljs.org/vue.f537d6f0.svg", "https://parceljs.org/elm.57b1d733.svg", "https://parceljs.org/json.9d49c9f7.svg", "https://parceljs.org/toml.c57411f8.svg", "https://parceljs.org/graphql.abe88238.svg", "https://parceljs.org/yaml.7efa81f5.svg", "https://parceljs.org/openGL.f24bd5d9.svg", "https://parceljs.org/pug.4241fb92.svg", "https://parceljs.org/mdx.6d2aad45.svg", "https://parceljs.org/xml.c5f44a73.svg" ]
[]
[]
[ "" ]
null
[]
null
Parcel includes a development server out of the box supporting hot reloading, HTTPS, an API proxy, and more.
en
https://parceljs.org/favicon.fe6f9d11.ico
https://parceljs.org/features/development/
Parcel includes a development server out of the box supporting hot reloading, HTTPS, an API proxy, and more. Dev server # Parcel’s builtin dev server is automatically started when you run the default parcel command, which is a shortcut for parcel serve. By default, it starts a server at http://localhost:1234. If port 1234 is already in use, then a fallback port will be used. After Parcel starts, the location where the dev server is listening will be printed to the terminal. The dev server supports several options, which you can specify via CLI options: -p, --port – Overrides the default port. The PORT environment variable can also be used to set the port. --host – By default, the dev server accepts connections on all interfaces. You can override this to specify that only connections from certain hosts should be accepted. --open – Automatically opens the entry in your default browser after Parcel starts. You can also pass a browser name to open a different browser, e.g. --open safari. Hot reloading # As you make changes to your code, Parcel automatically rebuilds the changed files and updates your app in the browser. By default, Parcel fully reloads the page, but in some cases it may perform Hot Module Replacement (HMR). HMR improves the development experience by updating modules in the browser at runtime without needing a whole page refresh. This means that application state can be retained as you change small things in your code. CSS changes are automatically applied via HMR with no page reload necessary. This is also true when using a framework with HMR support built in, like React (via Fast Refresh) and Vue. If you’re not using a framework, you can opt into HMR using the module.hot API. This will prevent the page from being reloaded, and instead apply the update in-place. module.hot is only available in development, so you'll need to check that it exists before using it. if (module.hot) { module.hot.accept(); } HMR works by replacing the code for a module, and then re-evaluating it along with all of its parents. If you need to customize this process, you can hook into it using the module.hot.accept and module.hot.dispose methods. These let you save and restore state inside the new version of the module. module.hot.dispose accepts a callback which is called when that module is about to be replaced. Use it to save any state to restore in the new version of the module in the provided data object, or clean up things like timers that will be re-created in the new version. module.hot.accept accepts a callback function which is executed when that module or any of its dependencies are updated. You can use this to restore state from the old version of the module using the data stored in module.hot.data. if (module.hot) { module.hot.dispose(function (data) { data.updated = Date.now(); }); module.hot.accept(function (getParents) { let { updated } = module.hot.data; }); } Development target # When using the dev server, only a single target can be built at once. By default, Parcel uses a development target that supports modern browsers. This means that transpilation of modern JavaScript syntax for older browsers is disabled. If you need to test in an older browser, you can provide the --target CLI option to choose which of your targets to build. For example, to build the "legacy" target defined in your package.json, use --target legacy. If you don't have any explicit targets defined, and only have a browserslist in your package.json, you can use the implicit default target with --target default. 
This will result in your source code being transpiled just as it would be in production. See the Targets documentation for more information. Lazy mode # In development, it can be frustrating to wait for your entire app to build before the dev server starts up. This is especially true when working on large apps with many pages. If you’re only working on one feature, you shouldn’t need to wait for all of the others to build unless you navigate to them. You can use the --lazy CLI flag to tell Parcel to defer building files until they are requested in the browser, which can significantly reduce development build times. The server starts quickly, and when you navigate to a page for the first time, Parcel builds only the files necessary for that page. When you navigate to another page, that page will be built on demand. If you navigate back to a page that was previously built, it loads instantly. parcel 'pages/*.html' --lazy This also works with dynamic import(), not just separate entries. So if you have a page with a dynamically loaded feature, that feature will not be built until it is activated. When it is requested, Parcel eagerly builds all of the dependencies as well, without waiting for them to be requested. Caching # Parcel caches everything it builds to disk. If you restart the dev server, Parcel will only rebuild files that have changed since the last time it ran. Parcel automatically tracks all of the files, configuration, plugins, and dev dependencies that are involved in your build, and granularly invalidates the cache when something changes. For example, if you change a configuration file, all of the source files that rely on that configuration will be rebuilt. By default, the cache is stored in the .parcel-cache folder inside your project. You should add this folder to your .gitignore (or equivalent) so that it is not committed in your repo. You can also override the location of the cache using the --cache-dir CLI option. Caching can also be disabled using the --no-cache flag. Note that this only disables reading from the cache – a .parcel-cache folder will still be created. HTTPS # Sometimes, you may need to use HTTPS during development. For example, you may need to use a certain hostname for authentication cookies, or debug mixed content issues. Parcel’s dev server supports HTTPS out of the box. You can either use an automatically generated certificate, or provide your own. To use an automatically generated self-signed certificate, use the --https CLI flag. The first time you load the page, you may need to manually trust this certificate in your browser. parcel src/index.html --https To use a custom certificate, you’ll need to use the --cert and --key CLI options to specify the certificate file and private key respectively. parcel src/index.html --cert certificate.cert --key private.key API proxy # To better emulate the actual production environment when developing web apps, you can specify paths that should be proxied to another server (e.g. your real API server or a local testing server) in a .proxyrc, .proxyrc.json or .proxyrc.js file. .proxyrc / .proxyrc.json # In this JSON file, you specify an object where every key is a pattern against which the URL is matched and the value is an http-proxy-middleware options object, as in the first sketch below. That example would cause http://localhost:1234/api/endpoint to be proxied to http://localhost:8000/endpoint. .proxyrc.js # For more complex configurations, a .proxyrc.js file allows you to attach any connect-compatible middleware, as in the second sketch below. 
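The following is a minimal sketch of both configurations, assuming the /api prefix should be stripped before requests are forwarded to port 8000 (the pathRewrite rule is inferred from the /api/endpoint-to-/endpoint behaviour described above, and the .proxyrc.js version assumes the v2-style http-proxy-middleware API).

.proxyrc (JSON):

{
  "/api": {
    "target": "http://localhost:8000/",
    "pathRewrite": { "^/api": "" }
  }
}

.proxyrc.js (CommonJS, using http-proxy-middleware):

const { createProxyMiddleware } = require("http-proxy-middleware");

module.exports = function (app) {
  // Forward anything under /api to the backend on port 8000,
  // stripping the /api prefix before the request is proxied.
  app.use(
    createProxyMiddleware("/api", {
      target: "http://localhost:8000/",
      pathRewrite: { "^/api": "" },
    })
  );
};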
First, make sure you install http-proxy-middleware into your project. The .proxyrc.js sketch above has the same behaviour as the .proxyrc version. If you would like to write this as an ES module instead, you can do so using a .proxyrc.mjs file, or by using the "type": "module" option in your package.json. File watcher # To support an optimal caching and development experience, Parcel utilizes a very fast watcher written in C++ that integrates with low-level file watching functionality of each operating system. Using this watcher, Parcel watches every file in your project root (including all node_modules). Based on events and metadata from these files, Parcel determines which files need to be rebuilt. Known issues with file watching # Safe Write # Some text editors and IDEs have a feature called "safe write" that prevents data loss by taking a copy of the file and renaming it when saved. However, this feature can prevent automatic detection of file updates. To disable safe write, use the options provided below: Sublime Text 3: add atomic_save: "false" to your user preferences. IntelliJ: use search in the preferences to find "safe write" and disable it. Vim: add :set backupcopy=yes to your settings. WebStorm: uncheck Use "safe write" in Preferences > Appearance & Behavior > System Settings. vis: add :set savemethod inplace to your settings. Linux: No space left on device # Depending on the size of your project, and your operating system's watcher limit, this error might pop up when you're running Parcel on Linux. To resolve this issue, change the sysctl configuration for fs.inotify to have a higher value for max_user_watches. You can do this by adding or changing the following lines in /etc/sysctl.conf: fs.inotify.max_queued_events = 16384 fs.inotify.max_user_instances = 128 fs.inotify.max_user_watches = 16384 If this error persists, you can try increasing the values even more. Using Dropbox, Google Drive or other cloud storage solutions # It is best practice to not place a Parcel project in a folder that is synced to the cloud using something like Dropbox or Google Drive. These solutions create a lot of file system events that can mess with our watcher and cause unnecessary rebuilds. Auto install # When you use a language or plugin that isn’t included by default, Parcel will automatically install the necessary dependencies into your project for you. For example, if you include a .sass file, Parcel will install the @parcel/transformer-sass plugin. When this happens, you'll see a message in the terminal, and the new dependency will be added to the devDependencies in your package.json. Parcel automatically detects which package manager you use in your project based on the lock file. For example, if yarn.lock is found, then Yarn will be used to install packages. If no lock file is found, then the package manager is chosen based on what is installed on your system. The following package managers are currently supported, listed in priority order: Yarn Pnpm Npm Auto install only occurs during development by default. During production builds, if a dependency is missing, the build will fail. You can also disable auto install during development using the --no-autoinstall CLI flag.
8585
dbpedia
3
34
https://ideone.com/l/java
en
Online Java compiler
http://profile.ak.fbcdn.net/hprofile-ak-prn1/50232_245768360841_3377786_q.jpg
http://profile.ak.fbcdn.net/hprofile-ak-prn1/50232_245768360841_3377786_q.jpg
[ "https://d2c5ubcnqbm27w.cloudfront.net/gfx/loader.gif", "https://d2c5ubcnqbm27w.cloudfront.net/gfx2/img/spoj.png", "https://ideone.com/gfx2/img/facebook-box.png", "https://d2c5ubcnqbm27w.cloudfront.net/gfx/loader.gif" ]
[]
[]
[ "online compiler", "online ide", "learn programming online", "programming online", "run code online", "snippet", "snippets", "pastebin", "online debugging tool", "online interpreter", "run your code online", "run code", "execute code", "C++", "Java", "Python" ]
null
[]
null
Compile Java online. Add input stream, save output, add notes and tags.
en
//d2c5ubcnqbm27w.cloudfront.net/gfx2/img/favicon.png
Ideone.com
null
Discover > Sphere Engine API The brand new service which powers Ideone! Discover > IDE Widget Widget for compiling and running the source code in a web browser!
8585
dbpedia
2
97
https://docs.yoctoproject.org/2.4.1/ref-manual/ref-manual.html
en
Yocto Project Reference Manual
[ "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/YP-flow-diagram.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/building-an-image.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/buildhistory.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/buildhistory-web.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/git-workflow.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/source-repos.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/index-downloads.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/yp-download.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/yocto-environment-ref.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/user-configuration.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/layer-input.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/source-input.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/package-feeds.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/source-fetching.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/patching.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/configuration-compile-autoreconf.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/analysis-for-package-splitting.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/image-generation.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/sdk-generation.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/images.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/sdk.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/cross-development-toolchains.png", "https://docs.yoctoproject.org/2.4.1/ref-manual/figures/build-workspace-directory.png" ]
[]
[]
[ "" ]
null
[ "Scott Rifenbark" ]
null
null
Class files are used to abstract common functionality and share it amongst multiple recipe (.bb) files. To use a class file, you simply make sure the recipe inherits the class. In most cases, when a recipe inherits a class it is enough to enable its features. There are cases, however, where in the recipe you might need to set variables or override some default behavior. Any Metadata usually found in a recipe can also be placed in a class file. Class files are identified by the extension .bbclass and are usually placed in a classes/ directory beneath the meta*/ directory found in the Source Directory. Class files can also be pointed to by BUILDDIR (e.g. build/) in the same way as .conf files in the conf directory. Class files are searched for in BBPATH using the same method by which .conf files are searched.
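For illustration, a minimal recipe fragment that inherits a class might look like the following sketch; the recipe name, URL, and configure option are hypothetical, while inherit, autotools, and EXTRA_OECONF are standard OpenEmbedded-Core facilities.

# hypothetical-app_1.0.bb -- a minimal sketch of inheriting a class in a recipe
SUMMARY = "Hypothetical example application"
LICENSE = "CLOSED"

SRC_URI = "http://example.com/hypothetical-app-${PV}.tar.gz"

# Inheriting the class is usually enough to enable its features:
inherit autotools

# ...but the recipe can still set variables or override class defaults:
EXTRA_OECONF += "--disable-documentation"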
8585
dbpedia
3
0
https://en.wikipedia.org/wiki/Autopackage
en
Autopackage
https://upload.wikimedia…package-logo.png
https://upload.wikimedia…package-logo.png
[ "https://en.wikipedia.org/static/images/icons/wikipedia.png", "https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-wordmark-en.svg", "https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-tagline-en.svg", "https://upload.wikimedia.org/wikipedia/commons/thumb/7/74/Autopackage-logo.png/120px-Autopackage-logo.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Autopackage_ready_to_install_software.png/220px-Autopackage_ready_to_install_software.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/4/44/Autopackage_installing_software.png/250px-Autopackage_installing_software.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Free_and_open-source_software_logo_%282009%29.svg/28px-Free_and_open-source_software_logo_%282009%29.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/8/8a/OOjs_UI_icon_edit-ltr-progressive.svg/10px-OOjs_UI_icon_edit-ltr-progressive.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b0/NewTux.svg/13px-NewTux.svg.png", "https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Free_and_open-source_software_logo_%282009%29.svg/16px-Free_and_open-source_software_logo_%282009%29.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/d/db/Symbol_list_class.svg/16px-Symbol_list_class.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/9/96/Symbol_category_class.svg/16px-Symbol_category_class.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/9/9c/Symbol_file_class.svg/16px-Symbol_file_class.svg.png", "https://login.wikimedia.org/wiki/Special:CentralAutoLogin/start?type=1x1", "https://en.wikipedia.org/static/images/footer/wikimedia-button.svg", "https://en.wikipedia.org/static/images/footer/poweredby_mediawiki.svg" ]
[]
[]
[ "" ]
null
[ "Contributors to Wikimedia projects" ]
2005-05-14T12:01:14+00:00
en
/static/apple-touch/wikipedia.png
https://en.wikipedia.org/wiki/Autopackage
Linux package management system. Autopackage infobox summary: original author Mike Hearn; developer Jan Niklas Hasse; initial release around 2002; stable release 1.4.2[1] (May 24, 2009); written in Bash, C, C++ and Python; operating system Linux; type package management system; license GNU Lesser General Public License; website autopackage.org at the Wayback Machine (archive index), Autopackage at Google Project Hosting. Autopackage is a free computer package management system aimed at making it simple to create a package that can be installed on all Linux distributions, created by Mike Hearn around 2002. In August 2010, Listaller and Autopackage announced that the projects would merge.[2] Projects such as aMSN and Inkscape offered an Autopackage installer, and Freecode offered content submitters a field to put the URL of Autopackages. The list of available packages is very limited, and most program versions are obsolete (for example, the most recent Autopackage of GIMP is 2.2.6, even though GIMP is now at version 2.8.2, as of August 2012).[3][4] Methodology Autopackage was designed for installing binary, or pre-compiled, versions of non-core applications such as word processors, web browsers, and personal computer games, rather than core libraries and applications such as operating system shells. The concept of Autopackage was to "improve" Linux as a desktop platform, with stable binary interfaces comparable to Windows and macOS.[5] Autopackage is not intended to provide installation of core applications and libraries, for compatibility reasons. Using Autopackage to distribute non-core libraries is something of a thorny issue. On the one hand, distributing them via Autopackage allows installation on a greater range of systems; on the other hand, there can be conflicts with native package dependencies. Autopackage is intended as a complementary system to a distribution's usual packaging system, such as RPM and deb. Unlike these formats, Autopackage verifies dependencies by checking for the presence of deployed files, rather than querying a database of installed packages. This simplifies the design requirements for Autopackage by relying on available resources, rather than necessitating tracking all the package choices of all targeted distributions.[6] Programs that use Autopackage must also be relocatable, meaning they must be installable to varying directories with a single binary. This enables an autopackage to be installed by a non-root user in the user's home directory. Package format Autopackage packages are indicated by the .package extension. They are executable bash scripts, and can be installed by running them. Files in an Autopackage archive are not easily extracted by anything other than Autopackage itself, as the internal format must be parsed in order to determine file layout and other issues.[7] Autopackage programs are installed to hard-coded system paths, which may conflict with existing packages installed by other means, thus leading to corruption. This can usually be remedied by uninstalling an older version of the package being installed with Autopackage. The Autopackage files can also be installed and removed using the Listaller toolset.[8] Listaller simply includes the Autopackage packages into its own package container format and handles Autopackage like any other Listaller package file. 
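For illustration, installing such a package from a terminal might look like the following sketch (the filename is hypothetical):

# .package files are executable bash scripts, so they can simply be run
chmod +x coolapp-1.0.package
./coolapp-1.0.package   # run as a non-root user to install into the home directory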
See also: Free and open-source software portal; AppImage; Flatpak; Listaller; Package management system; Bundle (software distribution); Linux package formats; List of software package management systems.
8585
dbpedia
0
20
https://groups.google.com/g/autopkg-discuss/c/40SMK8cSl74
en
Conditional PkgCopier
https://www.gstatic.com/…/groups_32dp.png
https://www.gstatic.com/…/groups_32dp.png
[ "https://fonts.gstatic.com/s/i/productlogos/groups/v9/web-48dp/logo_groups_color_1x_web_48dp.png", "https://lh3.googleusercontent.com/a/default-user=s40-c", "https://lh3.googleusercontent.com/a-/ALV-UjWu0Qpts5G27fhbKDgM8Cr5-KSGhd_8rRIFPTG63Y6FMUV332Ag0A=s40-c" ]
[]
[]
[ "" ]
null
[]
null
en
//www.gstatic.com/images/branding/product/1x/groups_32dp.png
https://groups.google.com/g/autopkg-discuss/c/40SMK8cSl74
Anthony Reimer, to autopkg...@googlegroups.com: I am working on some new recipes that feed AutoPkg packages into my DeployStudio repo using the PkgCopier processor. The only problem with my recipes is that they will always copy the package to my DS repo, even if no updated package is generated. The PkgCreator process helpfully outputs a variable (new_pkg_request) that could be used to avoid superfluous copying. For reference, I have a couple of those recipes in my GitHub repo: https://github.com/jazzace/AutoPkg-recipes My guess is that the best way to solve this would be to write a new processor that has that feature, but I don't know Python. So, Pythonistas out there: how hard would this processor be to write? I'm willing to make this my first Python project if it is a good project to learn the language. Anthony Reimer University of Calgary Timothy Sutton, to autopkg...@googlegroups.com: The PkgCopier was written mainly for the purpose of copying a package out of an installer or bundle (think Flash Player), rather than copying a result to a final destination, so there wasn't a consideration for it to look for existing packages first. It's good you picked up on the 'new_package_request' output variable. Take a look at the StopProcessingIf processor: https://github.com/autopkg/autopkg/blob/f37212bd9780896c4d4d80af1a13bca1fb1aae6c/Code/autopkglib/StopProcessingIf.py#L29-L35 This processor stops processing the recipe any further if the NSPredicate evaluates to true. This is the same underlying mechanism as Munki's conditional_items in manifests. StopProcessingIf is the only core AutoPkg processor that implements any conditional logic, which we're pretty wary about encouraging. However, it's still there and works for situations like this. So, by adding StopProcessingIf before your PkgCopier processor, we can stop this recipe from going any further, and we didn't need to write a custom processor: https://gist.github.com/timsutton/d834f81feaa4d1190573#file-firefox-ds-plist-L37-L45 Below that recipe file in the gist is the output of me running autopkg a second time to a temporary repo location. For what it's worth, you could probably also just use the Copier processor here instead, but the result is the same if you're just copying package files. Tim
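For reference, a sketch of the pattern described above, a StopProcessingIf step guarding a PkgCopier step in a recipe's Process array, might look like this (the DeployStudio destination path is a placeholder, and the PkgCopier argument values reflect common usage rather than the linked gist):

<dict>
    <key>Processor</key>
    <string>StopProcessingIf</string>
    <key>Arguments</key>
    <dict>
        <key>predicate</key>
        <string>new_package_request == FALSE</string>
    </dict>
</dict>
<dict>
    <key>Processor</key>
    <string>PkgCopier</string>
    <key>Arguments</key>
    <dict>
        <key>source_pkg</key>
        <string>%pkg_path%</string>
        <key>pkg_path</key>
        <string>/Volumes/DeployStudio/Packages/%NAME%-%version%.pkg</string>
    </dict>
</dict>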
8585
dbpedia
0
77
https://blog.eisenschmiede.com/posts/create-dock-items-with-autopkg/
en
Create Dock Items With Autopkg
https://blog.eisenschmiede.com/
https://blog.eisenschmiede.com/
[ "https://blog.eisenschmiede.com/pics/JamfPro_DockItems.png", "https://blog.eisenschmiede.com/pics/JamfPro_DockItem_Policy.png" ]
[]
[]
[ "", "" ]
null
[]
null
I always preferred to create dock items or entries to enhance the user experience on my managed macOS clients. Not-so-savvy macOS users can find their newly installed applications easily in the dock (in my experience especially Windows users think that the dock is their only way to start software) and experienced users simply get a good indication that the software was successfully installed (the feedback of Jamf’s Self Service is a bit lacking in my opinion).
en
https://blog.eisenschmie…icon-192x192.png
Blog - eisenschmiede.com
https://blog.eisenschmiede.com/posts/create-dock-items-with-autopkg/
I always preferred to create dock items or entries to enhance the user experience on my managed macOS clients. Not-so-savvy macOS users can find their newly installed applications easily in the dock (in my experience especially Windows users think that the dock is their only way to start software) and experienced users simply get a good indication that the software was successfully installed (the feedback of Jamf’s Self Service is a bit lacking in my opinion). In the past I used great tools like dockutil or docklib to manage dock items. I predeployed these binaries, and used scripts in the Self Service policies (after the installation) to create the dock items. And yet I searched for an alternative when I started to build up a new Jamf instance. Dockutil triggers a deprecation warning on macOS 12 Monterey (see this issue), since it’s running on Python 2.7. You could suppress the warning (see Graham’s great blog post), but I didn’t want to build up a new instance with workarounds. Docklib works great on Monterey, but seemed a bit overkill for my needs. I only want to add dock items for applications at the end of the dock. No need for precise positioning or creating dock items for multiple users on a single machine. Jamf’s Dock Items Jamf Pro has provided a mechanism to create dock items for a long time. You can predefine the item in the global settings under Computer Management: After defining them, you can use the item in a policy in the Dock Item Section: You have the possibility to add the item to the end or the beginning of the dock, or remove it. Drawbacks compared to the other solutions Like I already mentioned, you are not able to position the item precisely. But there are also other positive and negative aspects compared to docklib and dockutil: Positive No custom binary needed on the client (which needs to be installed first and updated) -> Less overhead No Python dependency -> No need to manage and install python3 like when you want to use docklib Fully manageable via AutoPKG with my custom processor Easy to use Negative Pretty inflexible Not suitable for multiuser deployments No way to control dock restarts (the Dock process always gets killed by Jamf. This causes a ‘screen flashing’ for the user and minimized apps open up again) Usage via AutoPKG I guess Jamf’s Dock Items weren’t used often in the “macOS Automation Community”, since a lot of ‘mouse work’ was needed to create a dock entry (create it first in the general settings, switch back to the policy and add it there…). So I decided to write an AutoPKG custom processor, which could be used in recipes to create dock entries and automate the process. I took Graham Pugh’s great JamfUploader as a base, and wrote some code to use Jamf’s API (sadly the classic API is needed, since there is no way to create dock items in the v2 API). Graham was so kind as to merge my code into his repo, so you can find the processor in his default recipe repo: autopkg/grahampugh-recipes. If you already use his uploaders, simply update the repo. If you never used his processors, add the repo to AutoPKG: autopkg repo-add grahampugh-recipes. After that you can simply use the processor com.github.grahampugh.jamf-upload.processors/JamfDockItemUploader in your recipes (for example, see the sketch below). After the creation of a dock item, you can use its name in your policy template. A full recipe to create an installer and upload a category, the package itself, the dock item and a policy to install the application could look like this:
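As a rough sketch of what the dock item stage of such a recipe might look like (the argument names below are assumptions for illustration, not documented input variables of JamfDockItemUploader):

Process:
  - Processor: com.github.grahampugh.jamf-upload.processors/JamfDockItemUploader
    Arguments:
      dock_item_name: "%NAME%"                     # assumed key: name shown in Jamf Pro's Dock Items list
      dock_item_type: "App"                        # assumed key: App / File / Folder
      dock_item_path: "/Applications/%NAME%.app"   # assumed key: target the dock item points to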
8585
dbpedia
2
15
https://pypi.org/project/autotyping/
en
autotyping
https://pypi.org/static/…er.abaf4b19.webp
https://pypi.org/static/…er.abaf4b19.webp
[ "https://pypi.org/static/images/logo-small.8998e9d1.svg", "https://pypi-camo.freetls.fastly.net/96aac0d362aaad7a810df98f54e229bb9cb45714/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f35376461346432653261353237303236626161616162333565363837326661353f73697a653d3530", "https://pypi-camo.freetls.fastly.net/96aac0d362aaad7a810df98f54e229bb9cb45714/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f35376461346432653261353237303236626161616162333565363837326661353f73697a653d3530", "https://pypi.org/static/images/blue-cube.572a5bfb.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi-camo.freetls.fastly.net/ed7074cadad1a06f56bc520ad9bd3e00d0704c5b/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f6177732d77686974652d6c6f676f2d7443615473387a432e706e67", "https://pypi-camo.freetls.fastly.net/8855f7c063a3bdb5b0ce8d91bfc50cf851cc5c51/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f64617461646f672d77686974652d6c6f676f2d6668644c4e666c6f2e706e67", "https://pypi-camo.freetls.fastly.net/df6fe8829cbff2d7f668d98571df1fd011f36192/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f666173746c792d77686974652d6c6f676f2d65684d3077735f6f2e706e67", "https://pypi-camo.freetls.fastly.net/420cc8cf360bac879e24c923b2f50ba7d1314fb0/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f676f6f676c652d77686974652d6c6f676f2d616734424e3774332e706e67", "https://pypi-camo.freetls.fastly.net/524d1ce72f7772294ca4c1fe05d21dec8fa3f8ea/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f6d6963726f736f66742d77686974652d6c6f676f2d5a443172685444462e706e67", "https://pypi-camo.freetls.fastly.net/d01053c02f3a626b73ffcb06b96367fdbbf9e230/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f70696e67646f6d2d77686974652d6c6f676f2d67355831547546362e706e67", "https://pypi-camo.freetls.fastly.net/67af7117035e2345bacb5a82e9aa8b5b3e70701d/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f73656e7472792d77686974652d6c6f676f2d4a2d6b64742d706e2e706e67", "https://pypi-camo.freetls.fastly.net/b611884ff90435a0575dbab7d9b0d3e60f136466/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f737461747573706167652d77686974652d6c6f676f2d5467476c6a4a2d502e706e67" ]
[]
[]
[ "" ]
null
[]
2024-03-25T20:24:49+00:00
A tool for autoadding simple type annotations.
en
/static/images/favicon.35549fe8.ico
PyPI
https://pypi.org/project/autotyping/
When I refactor code I often find myself tediously adding type annotations that are obvious from context: functions that don't return anything, boolean flags, etcetera. That's where autotyping comes in: it automatically adds those types and inserts the right annotations. Usage Here's how to use it: pip install autotyping python -m autotyping /path/to/my/code By default it does nothing; you have to add flags to make it do more transformations. The following are supported: Annotating return types: --none-return: add a -> None return type to functions without any return, yield, or raise in their body --scalar-return: add a return annotation to functions that only return literal bool, str, bytes, int, or float objects. Annotating parameter types: --bool-param: add a : bool annotation to any function parameter with a default of True or False --int-param, --float-param, --str-param, --bytes-param: add an annotation to any parameter for which the default is a literal int, float, str, or bytes object --annotate-optional foo:bar.Baz: for any parameter of the form foo=None, add Baz, imported from bar, as the type. For example, use --annotate-optional uid:my_types.Uid to annotate any uid in your codebase with a None default as Optional[my_types.Uid]. --annotate-named-param foo:bar.Baz: annotate any parameter with no default that is named foo with bar.Baz. For example, use --annotate-named-param uid:my_types.Uid to annotate any uid parameter in your codebase with no default as my_types.Uid. --guess-common-names: infer certain parameter types from their names based on common patterns in open-source Python code. For example, infer that a verbose parameter is of type bool. Annotating magical methods: --annotate-magics: add type annotation to certain magic methods. Currently this does the following: __str__ returns str __repr__ returns str __len__ returns int __length_hint__ returns int __init__ returns None __del__ returns None __bool__ returns bool __bytes__ returns bytes __format__ returns str __contains__ returns bool __complex__ returns complex __int__ returns int __float__ returns float __index__ returns int __exit__: the three parameters are Optional[Type[BaseException]], Optional[BaseException], and Optional[TracebackType] __aexit__: same as __exit__ --annotate-imprecise-magics: add imprecise type annotations for some additional magic methods. Currently this adds typing.Iterator return annotations to __iter__, __await__, and __reversed__. These annotations should have a generic parameter to indicate what you're iterating over, but that's too hard for autotyping to figure out. External integrations --pyanalyze-report: takes types suggested by pyanalyze's suggested_parameter_type and suggested_return_type codes and applies them. You can generate these with a command like: pyanalyze --json-output failures.json -e suggested_return_type -e suggested_parameter_type -v . --only-without-imports: only apply pyanalyze suggestions that do not require new imports. This is useful because suggestions that require imports may need more manual work. There are two shortcut flags to enable multiple transformations at once: --safe enables changes that should always be safe. This includes --none-return, --scalar-return, and --annotate-magics. --aggressive enables riskier changes that are more likely to produce new type checker errors. It includes all of --safe as well as --bool-param, --int-param, --float-param, --str-param, --bytes-param, and --annotate-imprecise-magics. 
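As a rough before-and-after illustration of the parameter and return flags above (the function is hypothetical, and the exact output depends on which flags you pass):

# before, in a hypothetical module.py
def configure(path, verbose=False, retries=3):
    print(path)

# after running, for example:
#   python -m autotyping --none-return --bool-param --int-param module.py
# the function would be rewritten along these lines:
def configure(path, verbose: bool = False, retries: int = 3) -> None:
    print(path)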
LibCST Autotyping is built as a LibCST codemod; see the LibCST documentation for more information on how to use codemods. If you wish to run things through the libcst.tool interface, you can do this like so: Make sure you have a .libcst.codemod.yaml with 'autotyping' in the modules list. For an example, see the .libcst.codemod.yaml in this repo. Run python -m libcst.tool codemod autotyping.AutotypeCommand /path/to/my/code Limitations Autotyping is intended to be a simple tool that uses heuristics to find annotations that would be tedious to add by hand. The heuristics may fail, and after you run autotyping you should run a type checker to verify that the types it added are correct. Known limitations: autotyping does not model code flow through a function, so it may miss implicit None returns Changelog 24.3.0 (March 25, 2024) Add simpler ways to invoke autotyping. Now, it is possible to simply use python3 -m autotyping to invoke the tool. (Thanks to Shantanu Jain.) Drop support for Python 3.7; add support for Python 3.12. (Thanks to Hugo van Kemenade.) Infer return types for some more magic methods. (Thanks to Dhruv Manilawala.) 23.3.0 (March 3, 2023) Fix crash on certain argument names like iterables (contributed by Marco Gorelli) 23.2.0 (February 3, 2023) Add --guess-common-names (contributed by John Litborn) Fix the --safe and --aggressive flags so they don't take ignored arguments __length_hint__ should return int (contributed by Nikita Sobolev) Fix bug in import adding (contributed by Shantanu) 22.9.0 (September 5, 2022) Add --safe and --aggressive Add --pyanalyze-report Do not add None return types to methods marked with @abstractmethod and to methods in stub files Improve type inference: "string" % ... is always str b"bytes" % ... is always bytes An and or or operator where left and right sides are of the same type returns that type is, is not, in, and not in always return bool 21.12.0 (December 21, 2021)
8585
dbpedia
1
17
https://macblog.org/autopkg-icons/
en
Automatically Export and Generate App Icons in AutoPkg Recipes
https://macblog.org/proc…e1e150e69eb.webp
https://macblog.org/proc…e1e150e69eb.webp
[ "https://macblog.org/processed_images/all-icons.9697d1810d061335.webp", "https://macblog.org/processed_images/example-composited-icons.1354ec2d4ddba6a3.webp", "https://macblog.org/processed_images/default-template-icons.6ca9070ab9e182cb.webp", "https://macblog.org/processed_images/example-custom-template.4d23aedbf67233f8.webp", "https://macblog.org/processed_images/padding-and-position.923b044e40c8580d.webp", "https://macblog.org/processed_images/chrome-custom-example.4b9ce254eaff892c.webp" ]
[]
[]
[ "" ]
null
[ "macblog.org" ]
2022-01-31T00:00:00
AppIconExtractor examines an app and exports its icon as a PNG image file (reading the CFBundleIconFile property from an app's Info.plist and saving that image as a PNG file). Additionally, AppIconExtractor can create icon variations by compositing a secondary image on top of the app's icon.
en
/apple-touch-icon.png
MacBlog
https://macblog.org/autopkg-icons
I'm a stickler for including icons for all policies available in Jamf Pro's Self Service app. They help users find items in Self Service, and generally make the app easier to use. However, I don't like manually extracting icons from apps. It's easy enough with a tool like SAP's Icons app, but if I'm automating package and policy creation with AutoPkg, I should similarly be able to automate icon creation, right? I created the AppIconExtractor AutoPkg processor to fully automate this task. At its core, AppIconExtractor examines an app and exports its icon as a PNG image file. More technically, it reads the CFBundleIconFile property from an app's Info.plist and saves that image as a PNG file at the path of your choice. Additionally, AppIconExtractor can create icon variations by compositing a secondary image on top of the app's icon. This makes it simple to automatically create a version of an icon with a destructive "red X" icon superimposed over the app icon for use in uninstallation policies, or a version with an "update" graphic for use in policies that update an app. Add my recipes and install the Pillow library First, you'll need my recipe repository available to your local AutoPkg installation. Add it with autopkg repo-add haircut-recipes. AppIconExtractor requires installation of the Pillow Python library. Pillow is used to convert and composite icons, and can be easily installed on the Mac you use to run AutoPkg. Use this command: /usr/local/autopkg/python -m pip install --upgrade Pillow Note that this installs the Pillow library within the path of AutoPkg's Python framework. This is very important. If you just run pip or pip3 without the explicit path to AutoPkg's Python installation, AutoPkg won't be able to find the library. Recipes will produce an error directing you to install Pillow using the specific command above. With Pillow installed, you're ready to go. Basic use Using AppIconExtractor is as simple as including the processor as a step in a recipe's Process dictionary. Use the shared processor syntax to call com.github.haircut.processors/AppIconExtractor. It requires only one argument: source_app, which is the path to the .app from which to extract an icon. If the path to the app points inside a disk image, that .dmg will be mounted automatically. By default, the app's icon will be output to the recipe's cache directory as %NAME%.png. You can optionally override this output path (and filename) by setting the icon_output_path argument. A simple example in XML format might look like: <key>Process</key> <array> <dict> <key>Processor</key> <string>com.github.haircut.processors/AppIconExtractor</string> <key>Arguments</key> <dict> <key>source_app</key> <string>%RECIPE_CACHE_DIR%/CoolApp/CoolApp.app</string> </dict> </dict> </array> This will extract the icon from "CoolApp.app" and save it as a 256px square PNG image to the recipe cache directory. As mentioned, adding the icon_output_path argument will give you additional control over the output path and filename. Here's an example in YAML format: Process: - Processor: com.github.haircut.processors/AppIconExtractor Arguments: source_app: "%RECIPE_CACHE_DIR%/CoolApp/CoolApp.app" icon_output_path: "%RECIPE_CACHE_DIR%/Icons/Icon-%NAME%.png" Generating composited variations Beyond extracting the app's icon, AppIconExtractor can also create variation images by compositing a "template image" on top of the app icon. The processor can output variations for an "uninstall," "update," and "install" version of the app icon. 
To generate a variation, add a processor argument to set an output path for that variation. Use one or more of the following arguments: composite_install_path composite_update_path composite_uninstall_path Omit any variations you don't want to generate. The processor will only create the variations for which you specify an output path. If you specify only output paths for variations, AppIconExtractor will use sensible defaults to composite suitable icons. The default templates are glyphs from SF Symbols that will work well in most situations. Each template is 64px in size, and looks nice in the corner. These templates are encoded within the processor; you don't need to do anything to use these defaults! Here's an example in YAML format: Process: - Processor: com.github.haircut.processors/AppIconExtractor Arguments: source_app: "%RECIPE_CACHE_DIR%/CoolApp/CoolApp.app" icon_output_path: "%RECIPE_CACHE_DIR%/Icons/Icon-%NAME%.png" composite_update_path: "%RECIPE_CACHE_DIR%/Icons/Update-%NAME%.png" composite_uninstall_path: "%RECIPE_CACHE_DIR%/Icons/Uninstall-%NAME%.png" Notice that we included arguments for the "update" and "uninstall" variations, but did not set the composite_install_path argument. This would output the "bare" app icon as well as variations for "update" and "uninstall" – but no "install" variation, since we omitted that argument. Custom templates If you don't like the default variation templates, you can use your own by setting composite_install_template, composite_update_template and/or composite_uninstall_template. Each argument should be the path to an alternative template image to use for that variation. AppIconExtractor will calculate the size of the template image at the path you specify and correctly anchor that template to the composite_position (see "Padding and position" below). Here's an example of using a custom template to generate an "uninstall" variation in XML format: <key>Process</key> <array> <dict> <key>Processor</key> <string>com.github.haircut.processors/AppIconExtractor</string> <key>Arguments</key> <dict> <key>source_app</key> <string>%RECIPE_CACHE_DIR%/CoolApp/CoolApp.app</string> <key>composite_uninstall_template</key> <string>%RECIPE_DIR%/radical-flame.png</string> <key>composite_uninstall_path</key> <string>%RECIPE_CACHE_DIR%/delete_%NAME%.png</string> </dict> </dict> </array> Padding and position AppIconExtractor includes a few additional options to customize your composited icon variations. composite_padding: sets the number of pixels by which the superimposed template image is offset from the edge of the icon. Defaults to 10 pixels. composite_position: sets the corner to which the superimposed template image for composited variations is anchored. Defaults to br for the bottom-right corner. You can change this to bl (bottom left), ur (upper right), or ul (upper left) if you prefer. Combinations of these options are shown below with the padding highlighted in pink: Clockwise from the upper left, this example shows: composite_padding omitted (so it defaults to 10) and composite_position of ul. composite_padding of 20 and composite_position of ur. composite_padding of 0 and composite_position omitted (so it defaults to br). composite_padding of 5 and composite_position of bl. Setting these options applies the same settings to all composited variations. This is an intentional design choice to keep the input arguments – and thus the required code – more manageable. 
Output variables AppIconExtractor sets the path(s) to the extracted app icon, and any composited variations, as output variables during an AutoPkg run. This means you can extract and generate icons, then immediately use those icons in subsequent processors like JamfPolicyUploader. The following output variables are set if (and only if) the associated variations are requested: app_icon_path: path to the extracted, unmodified app icon. Always set. install_icon_path: path to the composited "install" variation. Only set if this variation is requested. update_icon_path: path to the composited "update" variation. Only set if this variation is requested. uninstall_icon_path: path to the composited "uninstall" variation. Only set if this variation is requested. Example uses in recipes Here are two examples of using AppIconExtractor in a child recipe or override. Extract the icon from an available .app bundle Greg Neagle's recipe for Sublime Text 4 leaves the unarchived .app available in the recipe cache dir at %RECIPE_CACHE_DIR%/%NAME%/Sublime Text.app. We'll use this to extract the Sublime Text icon using AppIconExtractor's default settings. Process: - Processor: com.github.haircut.processors/AppIconExtractor Arguments: source_app: "%RECIPE_CACHE_DIR%/%NAME%/Sublime Text.app" - Processor: com.github.grahampugh.jamf-upload.processors/JamfPolicyUploader Arguments: policy_template: "%POLICY_TEMPLATE%" policy_name: "%POLICY_NAME%" icon: "%app_icon_path%" replace_icon: True This extracts Sublime Text's icon without generating any composite variations, then feeds that extracted icon to the JamfPolicyUploader processor. We set replace_icon to True to ensure any change to the icon by the vendor is automatically reflected within our Jamf policy. Unpacking a .pkg to extract an icon The recipe for the Google Chrome Enterprise package downloads a .pkg directly from the vendor, so no repackaging is needed. And while the AutoPkg recipe unpacks the package to perform code signature verification, it then runs the PathDeleter processor to clean up that operation. This means a child recipe does not have access to a .app from which to extract an icon. That means we'll need to do a little more work to unpack the package again so that we can get to the app bundle. We'll also generate a custom "uninstall" variation and override the default composition position and padding. 
Here's the Process of this more complex example in XML format: <key>Process</key> <array> <dict> <key>Processor</key> <string>FlatPkgUnpacker</string> <key>Arguments</key> <dict> <key>destination_path</key> <string>%RECIPE_CACHE_DIR%/unpack</string> <key>flat_pkg_path</key> <string>%pkg_path%</string> </dict> </dict> <dict> <key>Processor</key> <string>PkgPayloadUnpacker</string> <key>Arguments</key> <dict> <key>destination_path</key> <string>%RECIPE_CACHE_DIR%/unpack/pkgpayload</string> <key>pkg_payload_path</key> <string>%RECIPE_CACHE_DIR%/unpack/GoogleChrome.pkg/Payload</string> </dict> </dict> <dict> <key>Processor</key> <string>com.github.haircut.processors/AppIconExtractor</string> <key>Arguments</key> <dict> <key>composite_padding</key> <integer>20</integer> <key>composite_position</key> <string>ul</string> <key>composite_uninstall_path</key> <string>%RECIPE_CACHE_DIR%/Icon-Uninstall-%NAME%.png</string> <key>composite_uninstall_template</key> <string>/Users/haircut/Documents/delete.png</string> <key>icon_output_path</key> <string>%RECIPE_CACHE_DIR%/Icon-%NAME%.png</string> <key>source_app</key> <string>%RECIPE_CACHE_DIR%/unpack/pkgpayload/Google Chrome.app</string> </dict> </dict> <dict> <key>Processor</key> <string>PathDeleter</string> <key>Arguments</key> <dict> <key>path_list</key> <array> <string>%RECIPE_CACHE_DIR%/unpack</string> </array> </dict> </dict> <dict> <key>Processor</key> <string>com.github.grahampugh.jamf-upload.processors/JamfPolicyUploader</string> <key>Arguments</key> <dict> <key>icon</key> <string>%app_icon_path%</string> <key>policy_name</key> <string>Install %NAME%</string> <key>policy_template</key> <string>Self-Service-Policy.xml</string> <key>replace_icon</key> <true/> </dict> </dict> <dict> <key>Processor</key> <string>com.github.grahampugh.jamf-upload.processors/JamfPolicyUploader</string> <key>Arguments</key> <dict> <key>icon</key> <string>%uninstall_icon_path%</string> <key>policy_name</key> <string>Uninstall %NAME%</string> <key>policy_template</key> <string>Uninstall-Policy.xml</string> <key>replace_icon</key> <true/> </dict> </dict> </array> This unpacks the Google Chrome enterprise package, extracts the unmodified app icon, and generates an "uninstall" composite version with a custom graphic in the upper left corner with 20px of padding. The outputs of AppIconExtractor are then used as inputs to JamfPolicyUploader process runs to set the icons for two different policies. Setting the replace_icon argument to True ensures that any changes to the icons are reflected on the Jamf Pro policies. Hopefully this processor will help you extract icons without the manual work, and spiff up those Self Service policies.
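To tie the pieces together, a typical setup and run on the Mac that executes AutoPkg might look like this sketch; the first two commands come from the setup section above, while the recipe name is hypothetical:

# one-time setup
autopkg repo-add haircut-recipes
/usr/local/autopkg/python -m pip install --upgrade Pillow

# run any recipe or override whose Process includes AppIconExtractor
autopkg run -v CoolApp.jamf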