Kye Gomez committed
Commit 5d10206 · 0 Parent(s)

Initial commit

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. .github/FUNDING.yml +13 -0
  2. .github/ISSUE_TEMPLATE/bug_report.md +27 -0
  3. .github/ISSUE_TEMPLATE/feature_request.md +20 -0
  4. .github/PULL_REQUEST_TEMPLATE.yml +22 -0
  5. .github/dependabot.yml +14 -0
  6. .github/labeler.yml +54 -0
  7. .github/workflows/code_quality_control.yml +30 -0
  8. .github/workflows/cos_integration.yml +42 -0
  9. .github/workflows/docs.yml +19 -0
  10. .github/workflows/docs_test.yml +28 -0
  11. .github/workflows/label.yml +22 -0
  12. .github/workflows/lints.yml +25 -0
  13. .github/workflows/pr_request_checks.yml +27 -0
  14. .github/workflows/pull-request-links.yml +18 -0
  15. .github/workflows/pylint.yml +23 -0
  16. .github/workflows/python-publish.yml +32 -0
  17. .github/workflows/quality.yml +23 -0
  18. .github/workflows/ruff.yml +8 -0
  19. .github/workflows/run_test.yml +23 -0
  20. .github/workflows/stale.yml +27 -0
  21. .github/workflows/test.yml +48 -0
  22. .github/workflows/testing.yml +25 -0
  23. .github/workflows/unit-test.yml +33 -0
  24. .github/workflows/welcome.yml +19 -0
  25. .gitignore +163 -0
  26. .pre-commit-config.yaml +18 -0
  27. .readthedocs.yml +13 -0
  28. Dockerfile +25 -0
  29. LICENSE +21 -0
  30. Makefile +22 -0
  31. README.md +67 -0
  32. docs/.DS_Store +0 -0
  33. docs/applications/customer_support.md +42 -0
  34. docs/applications/enterprise.md +0 -0
  35. docs/applications/marketing_agencies.md +64 -0
  36. docs/architecture.md +6 -0
  37. docs/assets/css/extra.css +7 -0
  38. docs/bounties.md +86 -0
  39. docs/contributing.md +123 -0
  40. docs/demos.md +8 -0
  41. docs/design.md +152 -0
  42. docs/examples/count-tokens.md +29 -0
  43. docs/examples/index.md +3 -0
  44. docs/examples/load-and-query-pinecone.md +49 -0
  45. docs/examples/load-query-and-chat-marqo.md +51 -0
  46. docs/examples/query-webpage.md +23 -0
  47. docs/examples/store-conversation-memory-in-dynamodb.md +47 -0
  48. docs/examples/talk-to-a-pdf.md +37 -0
  49. docs/examples/talk-to-a-webpage.md +50 -0
  50. docs/examples/talk-to-redshift.md +46 -0
.github/FUNDING.yml ADDED
@@ -0,0 +1,13 @@
# These are supported funding model platforms

github: [kyegomez]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: #Nothing
.github/ISSUE_TEMPLATE/bug_report.md ADDED
@@ -0,0 +1,27 @@
---
name: Bug report
about: Create a detailed report on the bug and its root cause. Conduct a root cause error analysis.
title: "[BUG] "
labels: bug
assignees: kyegomez

---

**Describe the bug**
A clear and concise description of what the bug is and what the main root cause error is. Test very thoroughly before submitting.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here.
.github/ISSUE_TEMPLATE/feature_request.md ADDED
@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: 'kyegomez'

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
.github/PULL_REQUEST_TEMPLATE.yml ADDED
@@ -0,0 +1,22 @@
<!-- Thank you for contributing to Zeta!

Replace this comment with:
- Description: a description of the change,
- Issue: the issue # it fixes (if applicable),
- Dependencies: any dependencies required for this change,
- Tag maintainer: for a quicker response, tag the relevant maintainer (see below),
- Twitter handle: we announce bigger features on Twitter. If your PR gets announced and you'd like a mention, we'll gladly shout you out!

If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use.

Maintainer responsibilities:
- nn / Misc / if you don't know who to tag: kye@apac.ai
- tokenizers: kye@apac.ai
- training / Prompts: kye@apac.ai
- models: kye@apac.ai

If no one reviews your PR within a few days, feel free to email kye@apac.ai.

See contribution guidelines for more information on how to write/run tests, lint, etc: https://github.com/kyegomez/zeta -->
.github/dependabot.yml ADDED
@@ -0,0 +1,14 @@
# https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/configuration-options-for-dependency-updates

version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"

  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
.github/labeler.yml ADDED
@@ -0,0 +1,54 @@
# this is a config file for the github action labeler

# Add 'root' label to any root file changes
# Quotation marks are required for the leading asterisk
root:
  - changed-files:
      - any-glob-to-any-file: '*'

# Add 'Documentation' label to any changes within 'docs' folder or any subfolders
Documentation:
  - changed-files:
      - any-glob-to-any-file: docs/**

# Add 'Tests' label to any file changes within the 'tests' folder
Tests:
  - changed-files:
      - any-glob-to-any-file: tests/*

# Add 'ghactions' label to any file changes within the '.github' folder or its workflows
ghactions:
  - changed-files:
      - any-glob-to-any-file:
          - .github/workflows/*
          - .github/*

# Add 'Scripts' label to any file changes within the 'scripts' folder
Scripts:
  - changed-files:
      - any-glob-to-any-file: scripts/*

## Equivalent of the above mentioned configuration using another syntax
Documentation:
  - changed-files:
      - any-glob-to-any-file: ['docs/*', 'guides/*']

# Add 'Documentation' label to any change to .md files within the entire repository
Documentation:
  - changed-files:
      - any-glob-to-any-file: '**/*.md'

# Add 'source' label to any change to src files within the source dir EXCEPT for the docs sub-folder
source:
  - all:
      - changed-files:
          - any-glob-to-any-file: 'src/**/*'
          - all-globs-to-all-files: '!src/docs/*'

# Add 'feature' label to any PR where the head branch name starts with `feature` or has a `feature` section in the name
feature:
  - head-branch: ['^feature', 'feature']

# Add 'release' label to any PR that is opened against the `main` branch
release:
  - base-branch: 'main'
.github/workflows/code_quality_control.yml ADDED
@@ -0,0 +1,30 @@
name: Linting and Formatting

on:
  push:
    branches:
      - main

jobs:
  lint_and_format:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir -r requirements.txt

      - name: Find Python files
        run: find swarms_torch -name "*.py" -type f -exec autopep8 --in-place --aggressive --aggressive {} +

      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/cos_integration.yml ADDED
@@ -0,0 +1,42 @@
name: Continuous Integration

on:
  push:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir -r requirements.txt

      - name: Run unit tests
        run: pytest tests/unit

      - name: Run integration tests
        run: pytest tests/integration

      - name: Run code coverage
        run: pytest --cov=swarms tests/

      - name: Run linters
        run: pylint swarms

      - name: Build documentation
        run: make docs

      - name: Validate documentation
        run: sphinx-build -b linkcheck docs build/docs

      - name: Run performance tests
        run: pytest tests/performance
.github/workflows/docs.yml ADDED
@@ -0,0 +1,19 @@
name: Docs WorkFlow

on:
  push:
    branches:
      - master
      - main
      - develop
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - run: pip install mkdocs-material
      - run: pip install "mkdocstrings[python]"
      - run: mkdocs gh-deploy --force
.github/workflows/docs_test.yml ADDED
@@ -0,0 +1,28 @@
name: Documentation Tests

on:
  push:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir -r requirements.txt

      - name: Build documentation
        run: make docs

      - name: Validate documentation
        run: sphinx-build -b linkcheck docs build/docs
.github/workflows/label.yml ADDED
@@ -0,0 +1,22 @@
# This workflow will triage pull requests and apply a label based on the
# paths that are modified in the pull request.
#
# To use this workflow, you will need to set up a .github/labeler.yml
# file with configuration. For more information, see:
# https://github.com/actions/labeler

name: Labeler
on: [pull_request_target]

jobs:
  label:

    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write

    steps:
      - uses: actions/labeler@v5.0.0
        with:
          repo-token: "${{ secrets.GITHUB_TOKEN }}"
.github/workflows/lints.yml ADDED
@@ -0,0 +1,25 @@
name: Linting

on:
  push:
    branches:
      - master

jobs:
  lint:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir -r requirements.txt

      - name: Run linters
        run: pylint swarms_torch
.github/workflows/pr_request_checks.yml ADDED
@@ -0,0 +1,27 @@
name: Pull Request Checks

on:
  pull_request:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir -r requirements.txt

      - name: Run tests and checks
        run: |
          pytest tests/
          pylint swarms_torch
.github/workflows/pull-request-links.yml ADDED
@@ -0,0 +1,18 @@
name: readthedocs/actions
on:
  pull_request_target:
    types:
      - opened
    paths:
      - "docs/**"

permissions:
  pull-requests: write

jobs:
  pull-request-links:
    runs-on: ubuntu-latest
    steps:
      - uses: readthedocs/actions/preview@v1
        with:
          project-slug: swarms_torch
.github/workflows/pylint.yml ADDED
@@ -0,0 +1,23 @@
name: Pylint

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10"]
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --no-cache-dir --upgrade pip
          pip install pylint
      - name: Analysing the code with pylint
        run: |
          pylint $(git ls-files '*.py')
.github/workflows/python-publish.yml ADDED
@@ -0,0 +1,32 @@

name: Upload Python Package

on:
  release:
    types: [published]

permissions:
  contents: read

jobs:
  deploy:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: |
          python -m pip install --no-cache-dir --upgrade pip
          pip install build
      - name: Build package
        run: python -m build
      - name: Publish package
        uses: pypa/gh-action-pypi-publish@ec4db0b4ddc65acdf4bff5fa45ac92d78b56bdf0
        with:
          user: __token__
          password: ${{ secrets.PYPI_API_TOKEN }}
.github/workflows/quality.yml ADDED
@@ -0,0 +1,23 @@
name: Quality

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  lint:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
    steps:
      - name: Checkout actions
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Init environment
        uses: ./.github/actions/init-environment
      - name: Run linter
        run: |
          pylint `git diff --name-only --diff-filter=d origin/main HEAD | grep -E '\.py$' | tr '\n' ' '`
.github/workflows/ruff.yml ADDED
@@ -0,0 +1,8 @@
name: Ruff
on: [ push, pull_request ]
jobs:
  ruff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: chartboost/ruff-action@v1
.github/workflows/run_test.yml ADDED
@@ -0,0 +1,23 @@
name: Python application test

on: [push]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python 3.10
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: |
          python -m pip install --no-cache-dir --upgrade pip
          pip install pytest
          if [ -f requirements.txt ]; then pip install --no-cache-dir -r requirements.txt; fi
      - name: Run tests with pytest
        run: |
          pytest tests/
.github/workflows/stale.yml ADDED
@@ -0,0 +1,27 @@
# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
#
# You can adjust the behavior by modifying this file.
# For more information, see:
# https://github.com/actions/stale
name: Mark stale issues and pull requests

on:
  schedule:
    - cron: '26 12 * * *'

jobs:
  stale:

    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write

    steps:
      - uses: actions/stale@v9
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-message: 'Stale issue message'
          stale-pr-message: 'Stale pull request message'
          stale-issue-label: 'no-issue-activity'
          stale-pr-label: 'no-pr-activity'
.github/workflows/test.yml ADDED
@@ -0,0 +1,48 @@
name: test

on:
  push:
    branches: [master]
  pull_request:
  workflow_dispatch:

env:
  POETRY_VERSION: "1.4.2"

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version:
          - "3.9"
          - "3.10"
          - "3.11"
        test_type:
          - "core"
          - "extended"
    name: Python ${{ matrix.python-version }} ${{ matrix.test_type }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: "1.4.2"
          cache-key: ${{ matrix.test_type }}
          install-command: |
            if [ "${{ matrix.test_type }}" == "core" ]; then
              echo "Running core tests, installing dependencies with poetry..."
              poetry install
            else
              echo "Running extended tests, installing dependencies with poetry..."
              poetry install -E extended_testing
            fi
      - name: Run ${{matrix.test_type}} tests
        run: |
          if [ "${{ matrix.test_type }}" == "core" ]; then
            make test
          else
            make extended_tests
          fi
        shell: bash
.github/workflows/testing.yml ADDED
@@ -0,0 +1,25 @@
name: Unit Tests

on:
  push:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir -r requirements.txt

      - name: Run unit tests
        run: pytest tests/
.github/workflows/unit-test.yml ADDED
@@ -0,0 +1,33 @@
name: build

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:

  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir -r requirements.txt

      - name: Run Python unit tests
        run: python3 -m unittest discover -s tests

      - name: Verify that the Docker image for the action builds
        run: docker build . --file Dockerfile

      - name: Verify integration test results
        run: python3 -m unittest discover -s tests
.github/workflows/welcome.yml ADDED
@@ -0,0 +1,19 @@
name: Welcome WorkFlow

on:
  issues:
    types: [opened]
  pull_request_target:
    types: [opened]

jobs:
  build:
    name: 👋 Welcome
    permissions: write-all
    runs-on: ubuntu-latest
    steps:
      - uses: actions/first-interaction@v1.3.0
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          issue-message: "Hello there, thank you for opening an Issue! 🙏🏻 The team was notified and they will get back to you asap."
          pr-message: "Hello there, thank you for opening a PR! 🙏🏻 The team was notified and they will get back to you asap."
.gitignore ADDED
@@ -0,0 +1,163 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so
.vscode/
.vscode

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
.ruff_cache/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
.pre-commit-config.yaml ADDED
@@ -0,0 +1,18 @@
repos:
  - repo: https://github.com/ambv/black
    rev: 22.3.0
    hooks:
      - id: black
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: 'v0.0.255'
    hooks:
      - id: ruff
        args: [--fix]
  - repo: https://github.com/nbQA-dev/nbQA
    rev: 1.6.3
    hooks:
      - id: nbqa-black
        additional_dependencies: [ipython==8.12, black]
      - id: nbqa-ruff
        args: ["--ignore=I001"]
        additional_dependencies: [ipython==8.12, ruff]
.readthedocs.yml ADDED
@@ -0,0 +1,13 @@
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.11"

mkdocs:
  configuration: mkdocs.yml

python:
  install:
    - requirements: requirements.txt
Dockerfile ADDED
@@ -0,0 +1,25 @@
# ==================================
# Use an official Python runtime as a parent image
FROM python:3.10-slim
RUN apt-get update && apt-get -y install libgl1-mesa-dev libglib2.0-0 build-essential; apt-get clean
RUN pip install opencv-contrib-python-headless

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Set the working directory in the container
WORKDIR /usr/src/zeta


# Install Python dependencies
# COPY requirements.txt and pyproject.toml if you're using poetry for dependency management
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt

RUN pip install --no-cache-dir zetascale

# Copy the rest of the application
COPY . .
LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 Eternal Reclaimer

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Makefile ADDED
@@ -0,0 +1,22 @@
.PHONY: style check_code_quality

export PYTHONPATH = .
check_dirs := src

style:
	black $(check_dirs)
	isort --profile black $(check_dirs)

check_code_quality:
	black --check $(check_dirs)
	isort --check-only --profile black $(check_dirs)
	# stop the build if there are Python syntax errors or undefined names
	flake8 $(check_dirs) --count --select=E9,F63,F7,F82 --show-source --statistics
	# exit-zero treats all errors as warnings. E203 for black, E501 for docstring, W503 for line breaks before logical operators
	flake8 $(check_dirs) --count --max-line-length=88 --exit-zero --ignore=D --extend-ignore=E203,E501,W503 --statistics

publish:
	python setup.py sdist bdist_wheel
	twine upload -r testpypi dist/* -u ${PYPI_USERNAME} -p ${PYPI_TEST_PASSWORD} --verbose
	twine check dist/*
	twine upload dist/* -u ${PYPI_USERNAME} -p ${PYPI_PASSWORD} --verbose
README.md ADDED
@@ -0,0 +1,67 @@
[![Multi-Modality](agorabanner.png)](https://discord.com/servers/agora-999382051935506503)

# Python Package Template

[![Join our Discord](https://img.shields.io/badge/Discord-Join%20our%20server-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/agora-999382051935506503) [![Subscribe on YouTube](https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white)](https://www.youtube.com/@kyegomez3242) [![Connect on LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/kye-g-38759a207/) [![Follow on X.com](https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white)](https://x.com/kyegomezb)

An easy, reliable, fluid template for Python packages, complete with docs, testing suites, READMEs, GitHub workflows, linting, and much more.


## Installation

You can install the package using pip:

```bash
pip install -e .
```

# Usage
```python
print("hello world")
```



### Code Quality 🧹

- `make style` to format the code
- `make check_code_quality` to check code quality (PEP8 basically)
- `black .`
- `ruff . --fix`

### Tests 🧪

[`pytest`](https://docs.pytest.org/en/7.1.x/) is used to run our tests.

### Publish on PyPi 🚀

**Important**: Before publishing, edit `__version__` in [src/__init__](/src/__init__.py) to match the wanted new version.

```
poetry build
poetry publish
```

### CI/CD 🤖

We use [GitHub actions](https://github.com/features/actions) to automatically run tests and check code quality when a new PR is opened against `main`.

On any pull request, we will check the code quality and tests.

When a new release is created, we will try to push the new code to PyPi. We use [`twine`](https://twine.readthedocs.io/en/stable/) to make our life easier.

The **correct steps** to create a new release are the following:
- edit `__version__` in [src/__init__](/src/__init__.py) to match the wanted new version.
- create a new [`tag`](https://git-scm.com/docs/git-tag) with the release name, e.g. `git tag v0.0.1 && git push origin v0.0.1`, or from the GitHub UI.
- create a new release from the GitHub UI.

The CI will run when you create the new release.

# Docs
We use MkDocs. This repo comes with the Zeta docs. All the docs configurations are already here, along with the Read the Docs configs.



# License
MIT
docs/.DS_Store ADDED
Binary file (8.2 kB).
 
docs/applications/customer_support.md ADDED
@@ -0,0 +1,42 @@
## **Applications of Zeta: Revolutionizing Customer Support**

---

**Introduction**:
In today's fast-paced digital world, responsive and efficient customer support is a linchpin for business success. The introduction of AI-driven Zeta in the customer support domain can transform the way businesses interact with and assist their customers. By leveraging the combined power of multiple AI agents working in concert, businesses can achieve unprecedented levels of efficiency, customer satisfaction, and operational cost savings.

---

### **The Benefits of Using Zeta for Customer Support:**

1. **24/7 Availability**: Zeta never sleeps. Customers receive instantaneous support at any hour, ensuring constant satisfaction and loyalty.

2. **Infinite Scalability**: Whether it's ten inquiries or ten thousand, Zeta can handle fluctuating volumes with ease, eliminating the need for vast human teams and minimizing response times.

3. **Adaptive Intelligence**: Zeta learns collectively, meaning that a solution found for one customer can be instantly applied to benefit all. This leads to constantly improving support experiences, evolving with every interaction.

---

### **Features - Reinventing Customer Support**:

- **AI Inbox Monitor**: Continuously scans email inboxes, identifying and categorizing support requests for swift responses.

- **Intelligent Debugging**: Proactively helps customers by diagnosing and troubleshooting underlying issues.

- **Automated Refunds & Coupons**: Seamless integration with payment systems like Stripe allows for instant issuance of refunds or coupons if a problem remains unresolved.

- **Full System Integration**: Holistically connects with CRM, email systems, and payment portals, ensuring a cohesive and unified support experience.

- **Conversational Excellence**: With advanced LLMs (Large Language Models), the swarm agents can engage in natural, human-like conversations, enhancing customer comfort and trust.

- **Rule-based Operation**: By working with rule engines, Zeta ensures that all actions adhere to company guidelines, delivering consistent, error-free support.

- **Turing Test Ready**: Crafted to meet and exceed the Turing Test standards, ensuring that every customer interaction feels genuine and personal.

---

**Conclusion**:
Zeta is not just another technological advancement; it represents the future of customer support. Its ability to provide round-the-clock, scalable, and continuously improving support can redefine customer experience standards. By adopting Zeta, businesses can stay ahead of the curve, ensuring unparalleled customer loyalty and satisfaction.

**Experience the future of customer support. Dive into the swarm revolution.**
docs/applications/enterprise.md ADDED
File without changes
docs/applications/marketing_agencies.md ADDED
@@ -0,0 +1,64 @@
## **Zeta in Marketing Agencies: A New Era of Automated Media Strategy**

---

### **Introduction**:
- Brief background on marketing agencies and their role in driving brand narratives and sales.
- Current challenges and pain points faced in media planning, placements, and budgeting.
- Introduction to the transformative potential of Zeta in reshaping the marketing industry.

---

### **1. Fundamental Problem: Media Plan Creation**:
- **Definition**: The challenge of creating an effective media plan that resonates with a target audience and aligns with brand objectives.

- **Traditional Solutions and Their Shortcomings**: Manual brainstorming sessions, over-reliance on past strategies, and long turnaround times leading to inefficiency.

- **How Zeta Addresses This Problem**:
  - **Benefit 1**: Automated Media Plan Generation – Zeta ingests branding summaries, objectives, and marketing strategies to generate media plans, eliminating guesswork and human error.
  - **Real-world Application of Zeta**: The automation of media plans based on client briefs, including platform selections, audience targeting, and creative versions.

---

### **2. Fundamental Problem: Media Placements**:
- **Definition**: The tedious task of determining where ads will be placed, considering demographics, platform specifics, and more.

- **Traditional Solutions and Their Shortcomings**: Manual placement leading to possible misalignment with target audiences and brand objectives.

- **How Zeta Addresses This Problem**:
  - **Benefit 2**: Precision Media Placements – Zeta analyzes audience data and demographics to suggest the best placements, optimizing for conversions and brand reach.
  - **Real-world Application of Zeta**: Automated selection of ad placements across platforms like Facebook, Google, and DSPs based on media plans.

---

### **3. Fundamental Problem: Budgeting**:
- **Definition**: Efficiently allocating and managing advertising budgets across multiple campaigns, platforms, and timeframes.

- **Traditional Solutions and Their Shortcomings**: Manual budgeting using tools like Excel, prone to errors, and inefficient shifts in allocations.

- **How Zeta Addresses This Problem**:
  - **Benefit 3**: Intelligent Media Budgeting – Zeta enables dynamic budget allocation based on performance analytics, maximizing ROI.
  - **Real-world Application of Zeta**: Real-time adjustments in budget allocations based on campaign performance, eliminating long waiting periods and manual recalculations.

---

### **Features**:
1. Automated Media Plan Generator: Input your objectives and receive a comprehensive media plan.
2. Precision Media Placement Tool: Ensure your ads appear in the right places to the right people.
3. Dynamic Budget Allocation: Maximize ROI with real-time budget adjustments.
4. Integration with Common Tools: Seamless integration with tools like Excel and APIs for exporting placements.
5. Conversational Platform: A suite of tools built for modern marketing agencies, bringing all tasks under one umbrella.

---

### **Testimonials**:
- "Zeta has completely revolutionized our media planning process. What used to take weeks now takes mere hours." - *Senior Media Strategist, Top-tier Marketing Agency*
- "The precision with which we can place ads now is unprecedented. It's like having a crystal ball for marketing!" - *Campaign Manager, Global Advertising Firm*

---

### **Conclusion**:
- Reiterate the immense potential of Zeta in revolutionizing media planning, placements, and budgeting for marketing agencies.
- Call to action: For marketing agencies looking to step into the future and leave manual inefficiencies behind, Zeta is the answer.

---
docs/architecture.md ADDED
@@ -0,0 +1,6 @@
# Architecture
* Simple file structure
* Fluid API
* Useful error handling that provides potential solutions and root cause error understanding
* nn, tokenizers, models, training
*
docs/assets/css/extra.css ADDED
@@ -0,0 +1,7 @@
.md-typeset__table {
  min-width: 100%;
}

.md-typeset table:not([class]) {
  display: table;
}
docs/bounties.md ADDED
@@ -0,0 +1,86 @@
# Bounty Program

Our bounty program is an exciting opportunity for contributors to help us build the future of Zeta. By participating, you can earn rewards while contributing to a project that aims to revolutionize digital activity.

Here's how it works:

1. **Check out our Roadmap**: We've shared our roadmap detailing our short and long-term goals. These are the areas where we're seeking contributions.

2. **Pick a Task**: Choose a task from the roadmap that aligns with your skills and interests. If you're unsure, you can reach out to our team for guidance.

3. **Get to Work**: Once you've chosen a task, start working on it. Remember, quality is key. We're looking for contributions that truly make a difference.

4. **Submit your Contribution**: Once your work is complete, submit it for review. We'll evaluate your contribution based on its quality, relevance, and the value it brings to Zeta.

5. **Earn Rewards**: If your contribution is approved, you'll earn a bounty. The amount of the bounty depends on the complexity of the task, the quality of your work, and the value it brings to Zeta.

## The Three Phases of Our Bounty Program

### Phase 1: Building the Foundation
In the first phase, our focus is on building the basic infrastructure of Zeta. This includes developing key components like the Zeta class, integrating essential tools, and establishing task completion and evaluation logic. We'll also start developing our testing and evaluation framework during this phase. If you're interested in foundational work and have a knack for building robust, scalable systems, this phase is for you.

### Phase 2: Enhancing the System
In the second phase, we'll focus on enhancing Zeta by integrating more advanced features, improving the system's efficiency, and refining our testing and evaluation framework. This phase involves more complex tasks, so if you enjoy tackling challenging problems and contributing to the development of innovative features, this is the phase for you.

### Phase 3: Towards Super-Intelligence
The third phase of our bounty program is the most exciting - this is where we aim to achieve super-intelligence. In this phase, we'll be working on improving the swarm's capabilities, expanding its skills, and fine-tuning the system based on real-world testing and feedback. If you're excited about the future of AI and want to contribute to a project that could potentially transform the digital world, this is the phase for you.

Remember, our roadmap is a guide, and we encourage you to bring your own ideas and creativity to the table. We believe that every contribution, no matter how small, can make a difference. So join us on this exciting journey and help us create the future of Zeta.

**To participate in our bounty program, visit the [Zeta Bounty Program Page](https://zeta.ai/bounty).** Let's build the future together!



## Bounties for Roadmap Items

To accelerate the development of Zeta and to encourage more contributors to join our journey towards automating every digital activity in existence, we are announcing a Bounty Program for specific roadmap items. Each bounty will be rewarded based on the complexity and importance of the task. Below are the items available for bounty:

1. **Multi-Agent Debate Integration**: $2000
2. **Meta Prompting Integration**: $1500
3. **Zeta Class**: $1500
4. **Integration of Additional Tools**: $1000
5. **Task Completion and Evaluation Logic**: $2000
6. **Ocean Integration**: $2500
7. **Improved Communication**: $2000
8. **Testing and Evaluation**: $1500
9. **Worker Swarm Class**: $2000
10. **Documentation**: $500

For each bounty task, there will be a strict evaluation process to ensure the quality of the contribution. This process includes a thorough review of the code and extensive testing to ensure it meets our standards.

# 3-Phase Testing Framework

To ensure the quality and efficiency of the Swarm, we will introduce a 3-phase testing framework which will also serve as our evaluation criteria for each of the bounty tasks.

## Phase 1: Unit Testing
In this phase, individual modules will be tested to ensure that they work correctly in isolation. Unit tests will be designed for all functions and methods, with an emphasis on edge cases.
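
As a rough illustration of the kind of test this phase calls for, here is a minimal `pytest` sketch. The `chunk_text` helper is a made-up stand-in for a real Zeta utility, not part of the codebase:

```python
import pytest


def chunk_text(text: str, size: int) -> list[str]:
    """Toy function standing in for a real Zeta utility under test."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [text[i : i + size] for i in range(0, len(text), size)]


def test_chunk_text_basic():
    assert chunk_text("abcdef", 2) == ["ab", "cd", "ef"]


@pytest.mark.parametrize("text", ["", "a"])
def test_chunk_text_edge_cases(text):
    # Edge cases: empty string and input shorter than the chunk size
    assert chunk_text(text, 4) == ([] if text == "" else [text])


def test_chunk_text_rejects_non_positive_size():
    with pytest.raises(ValueError):
        chunk_text("abc", 0)
```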

## Phase 2: Integration Testing
After passing unit tests, we will test the integration of different modules to ensure they work correctly together. This phase will also test the interoperability of the Swarm with external systems and libraries.

## Phase 3: Benchmarking & Stress Testing
In the final phase, we will perform benchmarking and stress tests. We'll push the limits of the Swarm under extreme conditions to ensure it performs well in real-world scenarios. This phase will measure the performance, speed, and scalability of the Swarm under high load conditions.

By following this 3-phase testing framework, we aim to develop a reliable, high-performing, and scalable Swarm that can automate all digital activities.

# Reverse Engineering to Reach Phase 3

To reach the Phase 3 level, we need to reverse engineer the tasks we need to complete. Here's an example of what this might look like:

1. **Set Clear Expectations**: Define what success looks like for each task. Be clear about the outputs and outcomes we expect. This will guide our testing and development efforts.

2. **Develop Testing Scenarios**: Create a comprehensive list of testing scenarios that cover both common and edge cases. This will help us ensure that our Swarm can handle a wide range of situations.

3. **Write Test Cases**: For each scenario, write detailed test cases that outline the exact steps to be followed, the inputs to be used, and the expected outputs.

4. **Execute the Tests**: Run the test cases on our Swarm, making note of any issues or bugs that arise.

5. **Iterate and Improve**: Based on the results of our tests, iterate and improve our Swarm. This may involve fixing bugs, optimizing code, or redesigning parts of our system.

6. **Repeat**: Repeat this process until our Swarm meets our expectations and passes all test cases.

By following these steps, we will systematically build, test, and improve our Swarm until it reaches the Phase 3 level. This methodical approach will help us ensure that we create a reliable, high-performing, and scalable Swarm that can truly automate all digital activities.

Let's shape the future of digital automation together!
docs/contributing.md ADDED
@@ -0,0 +1,123 @@
# Contributing

Thank you for your interest in contributing to Zeta! We welcome contributions from the community to help improve usability and readability. By contributing, you can be a part of creating a dynamic and interactive AI system.

To get started, please follow the guidelines below.


## Optimization Priorities

To continuously improve Zeta, we prioritize the following design objectives:

1. **Usability**: Increase the ease of use and user-friendliness of the swarm system to facilitate adoption and interaction with basic input.

2. **Reliability**: Improve the swarm's ability to obtain the desired output even with basic and un-detailed input.

3. **Speed**: Reduce the time it takes for the swarm to accomplish tasks by improving the communication layer, critiquing, and self-alignment with meta prompting.

4. **Scalability**: Ensure that the system is asynchronous, concurrent, and self-healing to support scalability.

Our goal is to continuously improve Zeta by following this roadmap while also being adaptable to new needs and opportunities as they arise.

## Join the Zeta Community

Join the Zeta community on Discord to connect with other contributors, coordinate work, and receive support.

- [Join the Zeta Discord Server](https://discord.gg/qUtxnK2NMf)


## Report an Issue
The easiest way to contribute to our docs is through our public [issue tracker](https://github.com/kyegomez/zeta-docs/issues). Feel free to submit bugs, request features or changes, or contribute to the project directly.

## Pull Requests

Zeta docs are built using [MkDocs](https://squidfunk.github.io/mkdocs-material/getting-started/).

To directly contribute to Zeta documentation, first fork the [zeta-docs](https://github.com/kyegomez/zeta-docs) repository to your GitHub account. Then clone your repository to your local machine.

From inside the directory, run:

```pip install -r requirements.txt```

To run `zeta-docs` locally, run:

```mkdocs serve```

You should see something similar to the following:

```
INFO - Building documentation...
INFO - Cleaning site directory
INFO - Documentation built in 0.19 seconds
INFO - [09:28:33] Watching paths for changes: 'docs', 'mkdocs.yml'
INFO - [09:28:33] Serving on http://127.0.0.1:8000/
INFO - [09:28:37] Browser connected: http://127.0.0.1:8000/
```

Follow the typical PR process to contribute changes.

* Create a feature branch.
* Commit changes.
* Submit a PR.


-------
---

## Taking on Tasks

We have a growing list of tasks and issues that you can contribute to. To get started, follow these steps:

1. Visit the [Zeta GitHub repository](https://github.com/kyegomez/zeta) and browse through the existing issues.

2. Find an issue that interests you and make a comment stating that you would like to work on it. Include a brief description of how you plan to solve the problem and any questions you may have.

3. Once a project coordinator assigns the issue to you, you can start working on it.

If you come across an issue that is unclear but still interests you, please post in the Discord server mentioned above. Someone from the community will be able to help clarify the issue in more detail.

We also welcome contributions to documentation, such as updating markdown files, adding docstrings, creating system architecture diagrams, and other related tasks.

## Submitting Your Work

To contribute your changes to Zeta, please follow these steps:

1. Fork the Zeta repository to your GitHub account. You can do this by clicking on the "Fork" button on the repository page.

2. Clone the forked repository to your local machine using the `git clone` command.

3. Before making any changes, make sure to sync your forked repository with the original repository to keep it up to date. You can do this by following the instructions [here](https://docs.github.com/en/github/collaborating-with-pull-requests/syncing-a-fork).

4. Create a new branch for your changes. This branch should have a descriptive name that reflects the task or issue you are working on.

5. Make your changes in the branch, focusing on a small, focused change that only affects a few files.

6. Run any necessary formatting or linting tools to ensure that your changes adhere to the project's coding standards.

7. Once your changes are ready, commit them to your branch with descriptive commit messages.

8. Push the branch to your forked repository.

9. Create a pull request (PR) from your branch to the main Zeta repository. Provide a clear and concise description of your changes in the PR.

10. Request a review from the project maintainers. They will review your changes, provide feedback, and suggest any necessary improvements.

11. Make any required updates or address any feedback provided during the review process.

12. Once your changes have been reviewed and approved, they will be merged into the main branch of the Zeta repository.

13. Congratulations! You have successfully contributed to Zeta.

Please note that during the review process, you may be asked to make changes or address certain issues. It is important to engage in open and constructive communication with the project maintainers to ensure the quality of your contributions.

## Developer Setup

If you are interested in setting up the Zeta development environment, please follow the instructions provided in the [developer setup guide](docs/developer-setup.md). This guide provides an overview of the different tools and technologies used in the project.

## Join the Agora Community

Zeta is brought to you by Agora, the open-source AI research organization. Join the Agora community to connect with other researchers and developers working on AI projects.

- [Join the Agora Discord Server](https://discord.gg/qUtxnK2NMf)

Thank you for your contributions and for being a part of the Zeta and Agora community! Together, we can advance Humanity through the power of AI.
docs/demos.md ADDED
@@ -0,0 +1,8 @@
# Demo Ideas

* GPT-4
* Andromeda
* Kosmos
* LongNet
* Text to video diffusion
* Nebula
docs/design.md ADDED
@@ -0,0 +1,152 @@
# Design Philosophy Document for Zeta

## Usable

### Objective

Our goal is to ensure that Zeta is intuitive and easy to use for all users, regardless of their level of technical expertise. This includes the developers who implement Zeta in their applications, as well as end users who interact with the implemented systems.

### Tactics

- Clear and Comprehensive Documentation: We will provide well-written and easily accessible documentation that guides users through using and understanding Zeta.
- User-Friendly APIs: We'll design clean and self-explanatory APIs that help developers to understand their purpose quickly.
- Prompt and Effective Support: We will ensure that support is readily available to assist users when they encounter problems or need help with Zeta.

## Reliable

### Objective

Zeta should be dependable and trustworthy. Users should be able to count on Zeta to perform consistently and without error or failure.

### Tactics

- Robust Error Handling: We will focus on error prevention, detection, and recovery to minimize failures in Zeta.
- Comprehensive Testing: We will apply various testing methodologies such as unit testing, integration testing, and stress testing to validate the reliability of our software.
- Continuous Integration/Continuous Delivery (CI/CD): We will use CI/CD pipelines to ensure that all changes are tested and validated before they're merged into the main branch.

## Fast

### Objective

Zeta should offer high performance and rapid response times. The system should be able to handle requests and tasks swiftly.

### Tactics

- Efficient Algorithms: We will focus on optimizing our algorithms and data structures to ensure they run as quickly as possible.
- Caching: Where appropriate, we will use caching techniques to speed up response times.
- Profiling and Performance Monitoring: We will regularly analyze the performance of Zeta to identify bottlenecks and opportunities for improvement.
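
As a minimal sketch of the caching tactic, assuming a pure, repeatable computation (the `embed_text` helper below is a placeholder, not an existing Zeta function):

```python
from functools import lru_cache


@lru_cache(maxsize=1024)
def embed_text(text: str) -> tuple[float, ...]:
    # Placeholder for an expensive, deterministic computation (e.g., an embedding).
    # Repeated calls with the same input are served from the in-memory cache.
    return tuple(float(ord(c)) for c in text)
```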
+
39
+ ## Scalable
40
+
41
+ ### Objective
42
+
43
+ Zeta should be able to grow in capacity and complexity without compromising performance or reliability. It should be able to handle increased workloads gracefully.
44
+
45
+ ### Tactics
46
+
47
+ - Modular Architecture: We will design Zeta using a modular architecture that allows for easy scaling and modification.
48
+ - Load Balancing: We will distribute tasks evenly across available resources to prevent overload and maximize throughput.
49
+ - Horizontal and Vertical Scaling: We will design Zeta to be capable of both horizontal (adding more machines) and vertical (adding more power to an existing machine) scaling.
50
+
51
+ ### Philosophy
52
+
53
+ Zeta is designed with a philosophy of simplicity and reliability. We believe that software should be a tool that empowers users, not a hurdle that they need to overcome. Therefore, our focus is on usability, reliability, speed, and scalability. We want our users to find Zeta intuitive and dependable, fast and adaptable to their needs. This philosophy guides all of our design and development decisions.
54
+
+ # Swarm Architecture Design Document
+
+ ## Overview
+
+ The goal of the Swarm Architecture is to provide a flexible and scalable system for building swarm intelligence models that can solve complex problems. This document details the proposed design for a plug-and-play system that makes it easy to create custom swarms and provides pre-configured swarms with multi-modal agents.
+
+ ## Design Principles
+
+ - **Modularity**: The system will be built in a modular fashion, allowing various components to be easily swapped or upgraded.
+ - **Interoperability**: Different swarm classes and components should be able to work together seamlessly.
+ - **Scalability**: The design should support the growth of the system by adding more components or swarms.
+ - **Ease of Use**: Users should be able to easily create their own swarms or use pre-configured ones with minimal configuration.
+
+ ## Design Components
+
+ ### AbstractSwarm
+
+ The AbstractSwarm is an abstract base class that defines the basic structure of a swarm and the methods that need to be implemented. Any new swarm should inherit from this class and implement the required methods.
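+
+ As a minimal sketch of what that base class could look like (the constructor argument and the method names below are illustrative assumptions, not the final Zeta API):
+
+ ```python
+ from abc import ABC, abstractmethod
+ from typing import Any
+
+
+ class AbstractSwarm(ABC):
+     """Base class that all swarms inherit from."""
+
+     def __init__(self, openai_api_key: str) -> None:
+         self.openai_api_key = openai_api_key
+
+     @abstractmethod
+     def initialize_components(self) -> None:
+         """Set up tools, agents, worker nodes, and any shared state."""
+
+     @abstractmethod
+     def run(self, objective: str) -> Any:
+         """Run the swarm against the given objective and return the result."""
+ ```
+
+ Concrete swarm classes, including the pre-configured ones described below, would subclass this and fill in the two methods.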
+
+ ### Swarm Classes
+
+ Various swarm classes can be implemented by inheriting from the AbstractSwarm class. Each swarm class should implement the required methods for initializing the components, worker nodes, and boss node, and for running the swarm.
+
+ Pre-configured swarm classes with multi-modal agents can be provided for ease of use. These classes come with a default configuration of tools and agents, which can be used out of the box.
+
+ ### Tools and Agents
+
+ Tools and agents are the components that provide the actual functionality to the swarms. They can be language models, AI assistants, vector stores, or any other components that can help in problem solving.
+
+ To make the system plug-and-play, a standard interface should be defined for these components. Any new tool or agent should implement this interface so that it can be easily plugged into the system.
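+
+ As a sketch of what that interface could be (the `name`, `description`, and `run` members below are assumptions for illustration, not a settled contract), a simple Python protocol is enough to make components interchangeable:
+
+ ```python
+ from typing import Any, Protocol
+
+
+ class SwarmComponent(Protocol):
+     """Anything a swarm can call: a tool, an agent, or a vector store wrapper."""
+
+     name: str
+     description: str
+
+     def run(self, task: str, **kwargs: Any) -> Any:
+         """Execute the component on a task and return its output."""
+         ...
+ ```
+
+ Any component that exposes these members could then be passed to a swarm without the swarm needing to know its concrete type.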
+
+ ## Usage
+
+ Users can either use pre-configured swarms or create their own custom swarms.
+
+ To use a pre-configured swarm, they can simply instantiate the corresponding swarm class and call the run method with the required objective.
+
+ To create a custom swarm, they need to:
+
+ 1. Define a new swarm class inheriting from AbstractSwarm.
+ 2. Implement the required methods for the new swarm class.
+ 3. Instantiate the swarm class and call the run method.
+
+ ### Example
+
+ ```python
+ # Using a pre-configured swarm
+ swarm = PreConfiguredSwarm(openai_api_key)
+ swarm.run(objective)
+
+ # Creating a custom swarm
+ class CustomSwarm(AbstractSwarm):
+     # Implement the required methods here
+     ...
+
+ swarm = CustomSwarm(openai_api_key)
+ swarm.run(objective)
+ ```
+
+ ## Conclusion
+
+ This Swarm Architecture design provides a scalable and flexible system for building swarm intelligence models. The plug-and-play design allows users to easily use pre-configured swarms or create their own custom swarms.
+
+
+ # Swarming Architectures
+
+ Below are ten swarm architectures with their base requirements; a sketch of an abstract class that could process these components follows the list.
+
+ 1. **Hierarchical Swarm**: This architecture is characterized by a boss/worker relationship. The boss node takes high-level decisions and delegates tasks to the worker nodes. The worker nodes perform tasks and report back to the boss node.
+    - Requirements: Boss node (can be a large language model), worker nodes (can be smaller language models), and a task queue for task management.
+
+ 2. **Homogeneous Swarm**: In this architecture, all nodes in the swarm are identical and contribute equally to problem-solving. Each node has the same capabilities.
+    - Requirements: Homogeneous nodes (can be language models of the same size), communication protocol for nodes to share information.
+
+ 3. **Heterogeneous Swarm**: This architecture contains different types of nodes, each with its specific capabilities. This diversity can lead to more robust problem-solving.
+    - Requirements: Different types of nodes (can be different types and sizes of language models), a communication protocol, and a mechanism to delegate tasks based on node capabilities.
+
+ 4. **Competitive Swarm**: In this architecture, nodes compete with each other to find the best solution. The system may use a selection process to choose the best solutions.
+    - Requirements: Nodes (can be language models), a scoring mechanism to evaluate node performance, a selection mechanism.
+
+ 5. **Cooperative Swarm**: In this architecture, nodes work together and share information to find solutions. The focus is on cooperation rather than competition.
+    - Requirements: Nodes (can be language models), a communication protocol, a consensus mechanism to agree on solutions.
+
+ 6. **Grid-based Swarm**: This architecture positions agents on a grid, where they can only interact with their neighbors. This is useful for simulations, especially in fields like ecology or epidemiology.
+    - Requirements: Agents (can be language models), a grid structure, and a neighborhood definition (i.e., how to identify neighboring agents).
+
+ 7. **Particle Swarm Optimization (PSO) Swarm**: In this architecture, each agent represents a potential solution to an optimization problem. Agents move in the solution space based on their own and their neighbors' past performance. PSO is especially useful for continuous numerical optimization problems.
+    - Requirements: Agents (each representing a solution), a definition of the solution space, an evaluation function to rate the solutions, a mechanism to adjust agent positions based on performance.
+
+ 8. **Ant Colony Optimization (ACO) Swarm**: Inspired by ant behavior, this architecture has agents leave a pheromone trail that other agents follow, reinforcing the best paths. It's useful for problems like the traveling salesperson problem.
+    - Requirements: Agents (can be language models), a representation of the problem space, a pheromone updating mechanism.
+
+ 9. **Genetic Algorithm (GA) Swarm**: In this architecture, agents represent potential solutions to a problem. They can 'breed' to create new solutions and can undergo 'mutations'. GA swarms are good for search and optimization problems.
+    - Requirements: Agents (each representing a potential solution), a fitness function to evaluate solutions, a crossover mechanism to breed solutions, and a mutation mechanism.
+
+ 10. **Stigmergy-based Swarm**: In this architecture, agents communicate indirectly by modifying the environment, and other agents react to such modifications. It's a decentralized method of coordinating tasks.
+    - Requirements: Agents (can be language models), an environment that agents can modify, a mechanism for agents to perceive environment changes.
+
+ These architectures all have unique features and requirements, but they share the need for agents (often implemented as language models) and a mechanism for agents to communicate or interact, whether it's directly through messages, indirectly through the environment, or implicitly through a shared solution space. Some also require specific data structures, like a grid or problem space, and specific algorithms, like for evaluating solutions or updating agent positions.
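+
+ As a rough, non-authoritative sketch of how these requirements could be tied together, here is a minimal hierarchical swarm built on the AbstractSwarm sketch above; the boss/worker callables and the task-queue handling are assumptions for illustration, not Zeta's actual implementation:
+
+ ```python
+ from queue import Queue
+ from typing import Any, Callable, List
+
+
+ class HierarchicalSwarm(AbstractSwarm):
+     """Boss/worker swarm: the boss decomposes an objective, workers execute the tasks."""
+
+     def __init__(
+         self,
+         openai_api_key: str,
+         boss: Callable[[str], List[str]],
+         workers: List[Callable[[str], str]],
+     ) -> None:
+         super().__init__(openai_api_key)
+         self.boss = boss          # e.g. a large language model that plans and delegates
+         self.workers = workers    # e.g. smaller language models that execute tasks
+         self.task_queue = Queue()
+
+     def initialize_components(self) -> None:
+         # Nothing extra to set up in this minimal sketch.
+         pass
+
+     def run(self, objective: str) -> List[Any]:
+         # The boss breaks the objective into tasks and fills the task queue.
+         for task in self.boss(objective):
+             self.task_queue.put(task)
+
+         # Workers drain the queue in round-robin order and report results back.
+         results: List[Any] = []
+         worker_index = 0
+         while not self.task_queue.empty():
+             task = self.task_queue.get()
+             worker = self.workers[worker_index % len(self.workers)]
+             results.append(worker(task))
+             worker_index += 1
+         return results
+ ```
+
+ The other architectures would differ mainly in how `run` coordinates the nodes (competition and scoring, consensus, grid neighborhoods, pheromone updates, and so on) while keeping the same outer interface.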
docs/examples/count-tokens.md ADDED
@@ -0,0 +1,29 @@
+ To count tokens, you can use Zeta events and the `TokenCounter` util:
+
+ ```python
+ from zeta import utils
+ from zeta.events import (
+     StartPromptEvent, FinishPromptEvent,
+ )
+ from zeta.structures import Agent
+
+
+ token_counter = utils.TokenCounter()
+
+ # Add the prompt token counts to the counter whenever a prompt starts or finishes
+ agent = Agent(
+     event_listeners={
+         StartPromptEvent: [
+             lambda e: token_counter.add_tokens(e.token_count)
+         ],
+         FinishPromptEvent: [
+             lambda e: token_counter.add_tokens(e.token_count)
+         ],
+     }
+ )
+
+ agent.run("tell me about large language models")
+ agent.run("tell me about GPT")
+
+ print(f"total tokens: {token_counter.tokens}")
+ ```
docs/examples/index.md ADDED
@@ -0,0 +1,3 @@
+ This section of the documentation is dedicated to examples highlighting Zeta functionality.
+
+ We try to keep all examples up to date, but if you think there is a bug, please [submit a pull request](https://github.com/kyegomez/zeta-docs/tree/main/docs/examples). We are also more than happy to include new examples :)
docs/examples/load-and-query-pinecone.md ADDED
@@ -0,0 +1,49 @@
+ ```python
+ import hashlib
+ import json
+ from urllib.request import urlopen
+ from decouple import config
+ from zeta.drivers import PineconeVectorStoreDriver
+
+
+ def load_data(driver: PineconeVectorStoreDriver) -> None:
+     response = urlopen(
+         "https://raw.githubusercontent.com/wedeploy-examples/"
+         "supermarket-web-example/master/products.json"
+     )
+
+     # Upsert every product description, keyed by an MD5 hash of its title
+     for product in json.loads(response.read()):
+         driver.upsert_text(
+             product["description"],
+             vector_id=hashlib.md5(product["title"].encode()).hexdigest(),
+             meta={
+                 "title": product["title"],
+                 "description": product["description"],
+                 "type": product["type"],
+                 "price": product["price"],
+                 "rating": product["rating"]
+             },
+             namespace="supermarket-products"
+         )
+
+
+ vector_driver = PineconeVectorStoreDriver(
+     api_key=config("PINECONE_API_KEY"),
+     environment=config("PINECONE_ENVIRONMENT"),
+     index_name=config("PINECONE_INDEX_NAME")
+ )
+
+ load_data(vector_driver)
+
+ # Query for fruit priced at most 15 with a rating of at least 4
+ result = vector_driver.query(
+     "fruit",
+     count=3,
+     filter={
+         "price": {"$lte": 15},
+         "rating": {"$gte": 4}
+     },
+     namespace="supermarket-products"
+ )
+
+ print(result)
+ ```
docs/examples/load-query-and-chat-marqo.md ADDED
@@ -0,0 +1,51 @@
+ ```python
+ from zeta import utils
+ from zeta.drivers import MarqoVectorStoreDriver
+ from zeta.engines import VectorQueryEngine
+ from zeta.loaders import WebLoader
+ from zeta.structures import Agent
+ from zeta.tools import KnowledgeBaseClient
+ import openai
+ from marqo import Client
+
+ # Set the OpenAI API key
+ openai.api_key_path = "../openai_api_key.txt"
+
+ # Define the namespace
+ namespace = "kyegomez"
+
+ # Initialize the vector store driver
+ vector_store = MarqoVectorStoreDriver(
+     api_key=openai.api_key_path,
+     url="http://localhost:8882",
+     index="chat2",
+     mq=Client(api_key="foobar", url="http://localhost:8882")
+ )
+
+ # Get a list of all indexes
+ # indexes = vector_store.get_indexes()
+ # print(indexes)
+
+ # Initialize the query engine
+ query_engine = VectorQueryEngine(vector_store_driver=vector_store)
+
+ # Initialize the knowledge base tool
+ kb_tool = KnowledgeBaseClient(
+     description="Contains information about the Zeta Framework from www.zeta.ai",
+     query_engine=query_engine,
+     namespace=namespace
+ )
+
+ # Load artifacts from the web
+ artifacts = WebLoader(max_tokens=200).load("https://www.zeta.ai")
+
+ # Upsert the artifacts into the vector store
+ vector_store.upsert_text_artifacts({namespace: artifacts})
+
+ # Initialize the agent
+ agent = Agent(tools=[kb_tool])
+
+ # Start the chat
+ utils.Chat(agent).start()
+ ```
docs/examples/query-webpage.md ADDED
@@ -0,0 +1,23 @@
+ ```python
+ from zeta.artifacts import BaseArtifact
+ from zeta.drivers import LocalVectorStoreDriver
+ from zeta.loaders import WebLoader
+
+
+ vector_store = LocalVectorStoreDriver()
+
+ # Load the webpage and upsert each text artifact into the "zeta" namespace
+ for a in WebLoader(max_tokens=100).load("https://www.zeta.ai"):
+     vector_store.upsert_text_artifact(a, namespace="zeta")
+
+ results = vector_store.query(
+     "creativity",
+     count=3,
+     namespace="zeta"
+ )
+
+ values = [BaseArtifact.from_json(r.meta["artifact"]).value for r in results]
+
+ print("\n\n".join(values))
+ ```
docs/examples/store-conversation-memory-in-dynamodb.md ADDED
@@ -0,0 +1,47 @@
+ To store your conversation memory in DynamoDB, you can use the `DynamoDbConversationMemoryDriver`.
+ ```python
+ from zeta.memory.structure import ConversationMemory, Turn, Message
+ from zeta.drivers import DynamoDbConversationMemoryDriver
+
+ # Instantiate DynamoDbConversationMemoryDriver
+ dynamo_driver = DynamoDbConversationMemoryDriver(
+     aws_region="us-east-1",
+     table_name="conversations",
+     partition_key="convo_id",
+     value_attribute_key="convo_data",
+     partition_key_value="convo1"
+ )
+
+ # Create a ConversationMemory structure
+ conv_mem = ConversationMemory(
+     turns=[
+         Turn(
+             turn_index=0,
+             system=Message("Hello"),
+             user=Message("Hi")
+         ),
+         Turn(
+             turn_index=1,
+             system=Message("How can I assist you today?"),
+             user=Message("I need some information")
+         )
+     ],
+     latest_turn=Turn(
+         turn_index=2,
+         system=Message("Sure, what information do you need?"),
+         user=None  # user has not yet responded
+     ),
+     driver=dynamo_driver  # set the driver
+ )
+
+ # Store the conversation in DynamoDB
+ dynamo_driver.store(conv_mem)
+
+ # Load the conversation from DynamoDB
+ loaded_conv_mem = dynamo_driver.load()
+
+ # Display the loaded conversation
+ print(loaded_conv_mem.to_json())
+ ```
docs/examples/talk-to-a-pdf.md ADDED
@@ -0,0 +1,37 @@
+ This example demonstrates how to vectorize a PDF of the [Attention Is All You Need](https://arxiv.org/pdf/1706.03762.pdf) paper and set up a Zeta agent with the `KnowledgeBase` tool so it can use the paper during conversations.
+
+ ```python
+ import io
+ import requests
+ from zeta.engines import VectorQueryEngine
+ from zeta.loaders import PdfLoader
+ from zeta.structures import Agent
+ from zeta.tools import KnowledgeBaseClient
+ from zeta.utils import Chat
+
+ namespace = "attention"
+
+ # Download the paper and load it into the vector store under the "attention" namespace
+ response = requests.get("https://arxiv.org/pdf/1706.03762.pdf")
+ engine = VectorQueryEngine()
+
+ engine.vector_store_driver.upsert_text_artifacts(
+     {
+         namespace: PdfLoader().load(
+             io.BytesIO(response.content)
+         )
+     }
+ )
+
+ kb_client = KnowledgeBaseClient(
+     description="Contains information about the Attention Is All You Need paper. "
+                 "Use it to answer any related questions.",
+     query_engine=engine,
+     namespace=namespace
+ )
+
+ agent = Agent(
+     tools=[kb_client]
+ )
+
+ Chat(agent).start()
+ ```
docs/examples/talk-to-a-webpage.md ADDED
@@ -0,0 +1,50 @@
+ This example demonstrates how to vectorize a webpage and set up a Zeta agent with rules and the `KnowledgeBase` tool to use it during conversations.
+
+ ```python
+ from zeta.engines import VectorQueryEngine
+ from zeta.loaders import WebLoader
+ from zeta.rules import Ruleset, Rule
+ from zeta.structures import Agent
+ from zeta.tools import KnowledgeBaseClient
+ from zeta.utils import Chat
+
+
+ namespace = "physics-wiki"
+
+ engine = VectorQueryEngine()
+
+ # Load the Wikipedia article and store it in the vector store
+ artifacts = WebLoader().load(
+     "https://en.wikipedia.org/wiki/Physics"
+ )
+
+ engine.vector_store_driver.upsert_text_artifacts(
+     {namespace: artifacts}
+ )
+
+
+ kb_client = KnowledgeBaseClient(
+     description="Contains information about physics. "
+                 "Use it to answer any physics-related questions.",
+     query_engine=engine,
+     namespace=namespace
+ )
+
+ agent = Agent(
+     rulesets=[
+         Ruleset(
+             name="Physics Tutor",
+             rules=[
+                 Rule(
+                     "Always introduce yourself as a physics tutor"
+                 ),
+                 Rule(
+                     "Be truthful. Only discuss physics."
+                 )
+             ]
+         )
+     ],
+     tools=[kb_client]
+ )
+
+ Chat(agent).start()
+ ```
docs/examples/talk-to-redshift.md ADDED
@@ -0,0 +1,46 @@
+ This example demonstrates how to build an agent that can dynamically query Amazon Redshift Serverless tables and store their contents on the local hard drive.
+
+ Let's build a support agent that uses GPT-4:
+
+ ```python
+ import boto3
+ from zeta.drivers import AmazonRedshiftSqlDriver, OpenAiPromptDriver
+ from zeta.loaders import SqlLoader
+ from zeta.rules import Ruleset, Rule
+ from zeta.structures import Agent
+ from zeta.tools import SqlClient, FileManager
+ from zeta.utils import Chat
+
+ session = boto3.Session(region_name="REGION_NAME")
+
+ sql_loader = SqlLoader(
+     sql_driver=AmazonRedshiftSqlDriver(
+         database="DATABASE",
+         session=session,
+         workgroup_name="WORKGROUP_NAME"
+     )
+ )
+
+ sql_tool = SqlClient(
+     sql_loader=sql_loader,
+     table_name="people",
+     table_description="contains information about tech industry professionals",
+     engine_name="redshift"
+ )
+
+ agent = Agent(
+     tools=[sql_tool, FileManager()],
+     rulesets=[
+         Ruleset(
+             name="HumansOrg Agent",
+             rules=[
+                 Rule("Act and introduce yourself as a HumansOrg, Inc. support agent"),
+                 Rule("Your main objective is to help with finding information about people"),
+                 Rule("Only use information about people from the sources available to you")
+             ]
+         )
+     ]
+ )
+
+ Chat(agent).start()
+ ```