VYPR
Medium severity · 5.5 · NVD Advisory · Published Jun 19, 2025 · Updated Apr 29, 2026

CVE-2025-6278

Description

A vulnerability classified as critical was found in Upsonic versions up to 0.55.6. It affects the use of os.path.join in the file markdown/server.py: manipulation of the file.filename argument leads to path traversal. An exploit has been publicly disclosed and may be used.
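The vulnerability class described above is a common one: passing an attacker-controlled upload filename straight into os.path.join lets "../" segments (or an absolute path, which makes os.path.join discard the base directory entirely) escape the intended upload directory. The sketch below illustrates the general pattern and one possible mitigation; the directory name and function names are hypothetical, not taken from Upsonic's actual code.

```python
import os

# Hypothetical upload directory; the real location used by
# markdown/server.py is not shown in this advisory.
UPLOAD_DIR = "/srv/uploads"

def unsafe_path(filename: str) -> str:
    # Vulnerable pattern: a crafted filename such as "../../etc/passwd"
    # escapes UPLOAD_DIR once the path is resolved, and an absolute
    # filename makes os.path.join drop UPLOAD_DIR altogether.
    return os.path.join(UPLOAD_DIR, filename)

def safe_path(filename: str) -> str:
    # Mitigation sketch: keep only the base name, normalize, and verify
    # the result still lies under UPLOAD_DIR before using it.
    candidate = os.path.normpath(
        os.path.join(UPLOAD_DIR, os.path.basename(filename))
    )
    if os.path.commonpath([candidate, UPLOAD_DIR]) != UPLOAD_DIR:
        raise ValueError("path traversal attempt")
    return candidate

print(os.path.normpath(unsafe_path("../../etc/passwd")))  # /etc/passwd
print(safe_path("../../etc/passwd"))                      # /srv/uploads/passwd
```

The check against os.path.commonpath is defense in depth: os.path.basename already strips directory components on POSIX, but the containment check also catches any future refactoring that reintroduces raw user input into the join.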

Affected packages

Versions sourced from the GitHub Security Advisory.

Package           Affected versions   Patched versions
upsonic (PyPI)    < 0.56.0            0.56.0

Affected products

1
  • cpe:2.3:a:upsonic:upsonic:*:*:*:*:*:*:*:*
    Range: <=0.55.6

Patches

1
a54529acc6e4

Stability (#360)

https://github.com/Upsonic/Upsonic · Onur ULUSOY · Jun 15, 2025 · via GHSA
88 files changed · +2146 −11956
  • Dockerfile · +0 −101 · removed
    @@ -1,101 +0,0 @@
    -# Use the official Ubuntu base image
    -FROM ubuntu:22.04
    -
    -ENV DEBIAN_FRONTEND=noninteractive
    -ENV TZ=Etc/UTC
    -ENV USER=docker
    -
    -RUN apt-get update && \
    -    apt-get install -y software-properties-common && \
    -    rm -rf /var/lib/apt/lists/*
    -RUN add-apt-repository ppa:deadsnakes/ppa
    -RUN apt-get update
    -
    -RUN apt-get update && apt-get install -y \
    -    xfce4 \
    -    xfce4-goodies \
    -    tightvncserver \
    -    xterm \
    -    wget \
    -    curl \
    -    xvfb \
    -    software-properties-common \
    -    tzdata \
    -    python3.12 python3.12-dev gcc \
    -    python3.12-tk libportaudio2 scrot libportaudio2
    -
    -RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.12
    -
    -RUN add-apt-repository ppa:mozillateam/ppa
    -RUN apt-get update && apt-get install -y firefox-esr
    -
    -RUN apt-get install -y gnome-screenshot
    -
    -
    -
    -
    -RUN apt-get remove -y xfce4-power-manager
    -
    -
    -RUN curl -sL https://deb.nodesource.com/setup_18.x -o /tmp/nodesource_setup.sh
    -RUN bash /tmp/nodesource_setup.sh
    -RUN apt-get install -y nodejs
    -
    -
    -# Add symbolic link for uvx
    -RUN ln -s /home/docker/.local/bin/uvx /usr/local/bin/uvx
    -
    -
    -
    -RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
    -
    -RUN touch /home/docker/.Xauthority
    -RUN chown docker:docker /home/docker/.Xauthority
    -
    -USER docker
    -
    -
    -
    -RUN curl -LsSf https://astral.sh/uv/install.sh | sh
    -
    -RUN mkdir /home/docker/.vnc
    -RUN echo "docker" | vncpasswd -f > /home/docker/.vnc/passwd
    -RUN chmod 600 /home/docker/.vnc/passwd
    -
    -RUN echo '#!/bin/bash\nxrdb $HOME/.Xresources\nstartxfce4 &' > /home/docker/.vnc/xstartup
    -RUN chmod +x /home/docker/.vnc/xstartup
    -
    -
    -
    -EXPOSE 5901
    -EXPOSE 7541
    -
    -RUN mkdir /home/docker/Upsonic
    -COPY Upsonic /home/docker/Upsonic
    -
    -
    -RUN python3.12 -m pip install --upgrade pip
    -RUN python3.12 -m pip install browser-use==0.1.36 langchain-openai langchain-anthropic langchain-community
    -RUN python3.12 -m playwright install
    -RUN python3.12 -m pip install /home/docker/Upsonic[server]
    -
    -
    -
    -ADD Upsonic/wallpaper.png /home/docker/Pictures/wallpaper.png
    -
    -# Configure VNC startup script
    -RUN echo '#!/bin/bash\n\
    -xrdb $HOME/.Xresources\n\
    -startxfce4 &\n\
    -sleep 2\n\
    -export XAUTHORITY=$HOME/.Xauthority\n\
    -export DISPLAY=:1\n\
    -xfconf-query -c xfce4-desktop -p /backdrop/screen0/monitor0/workspace0/last-image --create -t string -s /home/docker/Pictures/wallpaper.png\n' > /home/docker/.vnc/xstartup
    -RUN chmod +x /home/docker/.vnc/xstartup
    -
    -
    -
    -CMD /bin/bash -c "export DISPLAY=:1 && /usr/bin/vncserver :1 -geometry 1280x720 -depth 24 && \
    -    python3.12 -c 'from upsonic.server import run_main_server_internal; run_main_server_internal(reload=False)' & \
    -    python3.12 -c 'from upsonic.tools_server import run_tools_server_internal; run_tools_server_internal(reload=False)' & \
    -    wait"
    
  • .github/workflows/custom.yml · +0 −141 · removed
    @@ -1,141 +0,0 @@
    -name: Custom Image
    -
    -on:
    -  workflow_dispatch:
    -    inputs:
    -      name:
    -        description: 'Name'
    -        required: true
    -        type: string
    -
    -
    -
    -permissions:
    -  packages: write
    -  contents: write
    -
    -jobs:
    -  ubuntu_amd64:
    -    runs-on: ubuntu-latest
    -
    -
    -
    -    steps:
    -      - name: Set Up Python
    -        uses: actions/setup-python@v2
    -        with:
    -          python-version: 3.8
    -      - uses: actions/checkout@v3
    -
    -
    -      - name: Set up QEMU
    -        uses: docker/setup-qemu-action@v3
    -
    -      - name: Set up Docker Buildx
    -        uses: docker/setup-buildx-action@v3
    -        with:
    -          platforms: linux/amd64,linux/arm64
    -
    -      - name: Adding required env vars for caching Docker build
    -        uses: actions/github-script@v7
    -        env:
    -          github-token: ${{ secrets.GITHUB_TOKEN }}
    -        with:
    -          script: |
    -            core.exportVariable('ACTIONS_CACHE_URL', process.env['ACTIONS_CACHE_URL'])
    -            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env['ACTIONS_RUNTIME_TOKEN'])
    -            core.exportVariable('ACTIONS_RUNTIME_URL', process.env['ACTIONS_RUNTIME_URL'])
    -
    -      - name: Echo required env vars
    -        shell: bash
    -        run: |
    -          echo "ACTIONS_CACHE_URL: $ACTIONS_CACHE_URL"
    -          echo "ACTIONS_RUNTIME_TOKEN: $ACTIONS_RUNTIME_TOKEN"     
    -          echo "ACTIONS_RUNTIME_URL: $ACTIONS_RUNTIME_URL"        
    -
    -      - name: Login to Docker Hub
    -        uses: docker/login-action@v3
    -        with:
    -          username: ${{ secrets.DOCKERHUB_USERNAME }}
    -          password: ${{ secrets.DOCKERHUB_TOKEN }}
    -
    -
    -
    -      - name: Build and Publish Docker Images
    -        env:
    -          VERSION: ${{ inputs.name }}
    -        run: |
    -          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin      
    -          git fetch
    -          cd ..
    -
    -          if [ -d models ]; then rm -r models; fi
    -          mkdir models
    -
    -          
    -          docker buildx build --platform linux/amd64 -f gca_docker/Dockerfile --push -t upsonic/gca_docker_ubuntu:$VERSION-AMD64 \
    -          --cache-to type=gha,mode=max \
    -          --cache-from type=gha .
    -          
    -
    -  ubuntu_arm64:
    -    runs-on: armlinux
    -
    -
    -
    -    steps:
    -      - name: Set Up Python
    -        uses: actions/setup-python@v2
    -        with:
    -          python-version: 3.8
    -      - uses: actions/checkout@v3
    -
    -
    -      - name: Set up QEMU
    -        uses: docker/setup-qemu-action@v3
    -
    -      - name: Set up Docker Buildx
    -        uses: docker/setup-buildx-action@v3
    -        with:
    -          platforms: linux/amd64,linux/arm64
    -
    -      - name: Adding required env vars for caching Docker build
    -        uses: actions/github-script@v7
    -        env:
    -          github-token: ${{ secrets.GITHUB_TOKEN }}
    -        with:
    -          script: |
    -            core.exportVariable('ACTIONS_CACHE_URL', process.env['ACTIONS_CACHE_URL'])
    -            core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env['ACTIONS_RUNTIME_TOKEN'])
    -            core.exportVariable('ACTIONS_RUNTIME_URL', process.env['ACTIONS_RUNTIME_URL'])
    -
    -      - name: Echo required env vars
    -        shell: bash
    -        run: |
    -          echo "ACTIONS_CACHE_URL: $ACTIONS_CACHE_URL"
    -          echo "ACTIONS_RUNTIME_TOKEN: $ACTIONS_RUNTIME_TOKEN"     
    -          echo "ACTIONS_RUNTIME_URL: $ACTIONS_RUNTIME_URL"        
    -
    -      - name: Login to Docker Hub
    -        uses: docker/login-action@v3
    -        with:
    -          username: ${{ secrets.DOCKERHUB_USERNAME }}
    -          password: ${{ secrets.DOCKERHUB_TOKEN }}
    -
    -
    -      - name: Build and Publish Docker Images
    -        env:
    -          VERSION: ${{ inputs.name }}
    -        run: |
    -          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin      
    -          git fetch
    -          cd ..
    -
    -          if [ -d models ]; then rm -r models; fi
    -          mkdir models
    -
    -          
    -          docker buildx build --platform linux/arm64 -f gca_docker/Dockerfile --push -t upsonic/gca_docker_ubuntu:$VERSION-ARM64 \
    -          --cache-to type=gha,mode=max \
    -          --cache-from type=gha .
    -          
    \ No newline at end of file
    
  • .github/workflows/publish.yml · +0 −175 · modified
    @@ -30,178 +30,3 @@ jobs:
           - name: Publish
             run: uv publish -t ${{ secrets.THE_PYPI_TOKEN }}
     
    -  build_docker_amd:
    -    name: "Build and Publish for AMD"
    -    runs-on: ubuntu-latest
    -    needs: pypi
    -
    -    steps:
    -      - uses: actions/checkout@v4
    -
    -      - name: Set up Docker Buildx
    -        uses: docker/setup-buildx-action@v3
    -        with:
    -          version: latest
    -          driver-opts: |
    -            image=moby/buildkit:master
    -
    -      - name: Login to Docker Hub
    -        uses: docker/login-action@v3
    -        with:
    -          username: ${{ secrets.DOCKERHUB_USERNAME }}
    -          password: ${{ secrets.DOCKERHUB_TOKEN }}
    -
    -      - name: Get version from tag
    -        id: get_version
    -        run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
    -
    -      - name: Build and Publish Docker Images
    -        env:
    -          VERSION: ${{ steps.get_version.outputs.VERSION }}
    -        run: |
    -          cd ..
    -          # Build for AMD64
    -          docker buildx create --use --name single-arch-builder
    -          docker buildx build --platform linux/amd64 \
    -            -f Upsonic/Dockerfile \
    -            --push \
    -            --load \
    -            -t upsonic/server:$VERSION-amd64 \
    -            -t upsonic/server:latest-amd64 \
    -            .
    -          docker push upsonic/server:$VERSION-amd64
    -          docker push upsonic/server:latest-amd64
    -
    -  build_docker_arm:
    -    name: "Build and Publish for ARM"
    -    runs-on: ubuntu-24.04-arm
    -    needs: pypi
    -
    -    steps:
    -      - uses: actions/checkout@v4
    -
    -      - name: Set up Docker Buildx
    -        uses: docker/setup-buildx-action@v3
    -        with:
    -          version: latest
    -          driver-opts: |
    -            image=moby/buildkit:master
    -
    -      - name: Login to Docker Hub
    -        uses: docker/login-action@v3
    -        with:
    -          username: ${{ secrets.DOCKERHUB_USERNAME }}
    -          password: ${{ secrets.DOCKERHUB_TOKEN }}
    -
    -      - name: Get version from tag
    -        id: get_version
    -        run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
    -
    -      - name: Build and Publish Docker Images for ARM
    -        env:
    -          VERSION: ${{ steps.get_version.outputs.VERSION }}
    -        run: |
    -          cd ..
    -          # Build for ARM64
    -          docker buildx create --use --name single-arch-builder
    -          docker buildx build --platform linux/arm64 \
    -            -f Upsonic/Dockerfile \
    -            --push \
    -            --load \
    -            -t upsonic/server:$VERSION-arm64 \
    -            -t upsonic/server:latest-arm64 \
    -            .
    -          docker push upsonic/server:$VERSION-arm64
    -          docker push upsonic/server:latest-arm64
    -
    -  create_manifest:
    -    name: "Create Multi-Architecture Manifest"
    -    needs: [build_docker_amd, build_docker_arm]
    -    runs-on: ubuntu-latest
    -
    -    steps:
    -      - name: Login to Docker Hub
    -        uses: docker/login-action@v3
    -        with:
    -          username: ${{ secrets.DOCKERHUB_USERNAME }}
    -          password: ${{ secrets.DOCKERHUB_TOKEN }}
    -
    -      - name: Get version from tag
    -        id: get_version
    -        run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
    -
    -      - name: Create and Push Manifest
    -        env:
    -          VERSION: ${{ steps.get_version.outputs.VERSION }}
    -          DOCKER_CLI_EXPERIMENTAL: enabled
    -          DOCKER_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
    -          DOCKER_PASSWORD: ${{ secrets.DOCKERHUB_TOKEN }}
    -        run: |
    -          mkdir -p ~/.docker
    -          
    -          # Create proper config with auth
    -          echo "{
    -            \"experimental\": \"enabled\",
    -            \"auths\": {
    -              \"https://index.docker.io/v1/\": {
    -                \"auth\": \"$(echo -n ${DOCKER_USERNAME}:${DOCKER_PASSWORD} | base64)\"
    -              }
    -            }
    -          }" > ~/.docker/config.json
    -          
    -          # Ensure we're logged in via docker login command as well
    -          echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
    -          
    -          # Pull the images first to ensure we have the correct manifests locally
    -          docker pull upsonic/server:$VERSION-amd64
    -          docker pull upsonic/server:$VERSION-arm64
    -          docker pull upsonic/server:latest-amd64
    -          docker pull upsonic/server:latest-arm64
    -          
    -          # Remove existing manifests if they exist
    -          docker manifest rm upsonic/server:$VERSION || true
    -          docker manifest rm upsonic/server:latest || true
    -          
    -          # Create and push the version manifest
    -          docker manifest create upsonic/server:$VERSION \
    -            upsonic/server:$VERSION-amd64 \
    -            upsonic/server:$VERSION-arm64
    -
    -          # Create and push the latest manifest
    -          docker manifest create upsonic/server:latest \
    -            upsonic/server:latest-amd64 \
    -            upsonic/server:latest-arm64
    -
    -          # Annotate the version manifest
    -          docker manifest annotate upsonic/server:$VERSION \
    -            upsonic/server:$VERSION-amd64 --arch amd64 --os linux
    -          docker manifest annotate upsonic/server:$VERSION \
    -            upsonic/server:$VERSION-arm64 --arch arm64 --os linux
    -
    -          # Annotate the latest manifest
    -          docker manifest annotate upsonic/server:latest \
    -            upsonic/server:latest-amd64 --arch amd64 --os linux
    -          docker manifest annotate upsonic/server:latest \
    -            upsonic/server:latest-arm64 --arch arm64 --os linux
    -
    -          # Inspect before pushing to verify
    -          docker manifest inspect upsonic/server:$VERSION
    -          docker manifest inspect upsonic/server:latest
    -
    -          # Push the manifests with retries
    -          for manifest in "$VERSION" "latest"; do
    -            for i in 1 2 3; do
    -              if docker manifest push --purge upsonic/server:$manifest; then
    -                echo "Successfully pushed manifest for $manifest"
    -                break
    -              fi
    -              echo "Push attempt $i failed for $manifest, retrying..."
    -              # Re-login before retry
    -              echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
    -              sleep 5
    -              if [ $i -eq 3 ]; then
    -                echo "Failed to push manifest for $manifest after 3 attempts"
    -                exit 1
    -              fi
    -            done
    -          done
    
  • .github/workflows/test_publisher.yml · +0 −141 · modified
    @@ -52,144 +52,3 @@ jobs:
           - name: Publish to TestPyPI
             run: uv publish -t ${{ secrets.THE_PYPI_TOKEN }}
     
    -  build_docker_amd:
    -    name: "Build and Publish for AMD"
    -    needs: generate_version
    -    runs-on: ubuntu-latest
    -
    -    steps:
    -      - uses: actions/checkout@v4
    -
    -      - name: Set up Docker Buildx
    -        uses: docker/setup-buildx-action@v3
    -        with:
    -          version: latest
    -          driver-opts: |
    -            image=moby/buildkit:master
    -
    -      - name: Login to Docker Hub
    -        uses: docker/login-action@v3
    -        with:
    -          username: ${{ secrets.DOCKERHUB_USERNAME }}
    -          password: ${{ secrets.DOCKERHUB_TOKEN }}
    -
    -      - name: Build and Publish Docker Images
    -        env:
    -          VERSION: ${{ needs.generate_version.outputs.random_version }}
    -        run: |
    -          cd ..
    -          # Build for AMD64
    -          docker buildx create --use --name single-arch-builder
    -          docker buildx build --platform linux/amd64 \
    -            -f Upsonic/Dockerfile \
    -            --push \
    -            --load \
    -            -t upsonic/server_test:$VERSION-amd64 \
    -            .
    -          docker push upsonic/server_test:$VERSION-amd64
    -
    -  build_docker_arm:
    -    name: "Build and Publish for ARM"
    -    needs: generate_version
    -    runs-on: ubuntu-24.04-arm
    -
    -    steps:
    -      - uses: actions/checkout@v4
    -
    -      - name: Set up Docker Buildx
    -        uses: docker/setup-buildx-action@v3
    -        with:
    -          version: latest
    -          driver-opts: |
    -            image=moby/buildkit:master
    -
    -      - name: Login to Docker Hub
    -        uses: docker/login-action@v3
    -        with:
    -          username: ${{ secrets.DOCKERHUB_USERNAME }}
    -          password: ${{ secrets.DOCKERHUB_TOKEN }}
    -
    -      - name: Build and Publish Docker Images for ARM
    -        env:
    -          VERSION: ${{ needs.generate_version.outputs.random_version }}
    -        run: |
    -          cd ..
    -          # Build for ARM64
    -          docker buildx create --use --name single-arch-builder
    -          docker buildx build --platform linux/arm64 \
    -            -f Upsonic/Dockerfile \
    -            --push \
    -            --load \
    -            -t upsonic/server_test:$VERSION-arm64 \
    -            .
    -          docker push upsonic/server_test:$VERSION-arm64
    -
    -  create_manifest:
    -    name: "Create Multi-Architecture Manifest"
    -    needs: [generate_version, build_docker_amd, build_docker_arm]
    -    runs-on: ubuntu-latest
    -
    -    steps:
    -      - name: Login to Docker Hub
    -        uses: docker/login-action@v3
    -        with:
    -          username: ${{ secrets.DOCKERHUB_USERNAME }}
    -          password: ${{ secrets.DOCKERHUB_TOKEN }}
    -
    -      - name: Create and Push Manifest
    -        env:
    -          VERSION: ${{ needs.generate_version.outputs.random_version }}
    -          DOCKER_CLI_EXPERIMENTAL: enabled
    -          DOCKER_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
    -          DOCKER_PASSWORD: ${{ secrets.DOCKERHUB_TOKEN }}
    -        run: |
    -          mkdir -p ~/.docker
    -          
    -          # Create proper config with auth
    -          echo "{
    -            \"experimental\": \"enabled\",
    -            \"auths\": {
    -              \"https://index.docker.io/v1/\": {
    -                \"auth\": \"$(echo -n ${DOCKER_USERNAME}:${DOCKER_PASSWORD} | base64)\"
    -              }
    -            }
    -          }" > ~/.docker/config.json
    -          
    -          # Ensure we're logged in via docker login command as well
    -          echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
    -          
    -          # Pull the images first to ensure we have the correct manifests locally
    -          docker pull upsonic/server_test:$VERSION-amd64
    -          docker pull upsonic/server_test:$VERSION-arm64
    -          
    -          # Remove existing manifest if it exists
    -          docker manifest rm upsonic/server_test:$VERSION || true
    -          
    -          # Create and push the manifest
    -          docker manifest create upsonic/server_test:$VERSION \
    -            upsonic/server_test:$VERSION-amd64 \
    -            upsonic/server_test:$VERSION-arm64
    -
    -          # Annotate the manifest with architecture and OS information
    -          docker manifest annotate upsonic/server_test:$VERSION \
    -            upsonic/server_test:$VERSION-amd64 --arch amd64 --os linux
    -          docker manifest annotate upsonic/server_test:$VERSION \
    -            upsonic/server_test:$VERSION-arm64 --arch arm64 --os linux
    -
    -          # Inspect before pushing to verify
    -          docker manifest inspect upsonic/server_test:$VERSION
    -
    -          # Push the manifest with retries
    -          for i in 1 2 3; do
    -            if docker manifest push --purge upsonic/server_test:$VERSION; then
    -              echo "Successfully pushed manifest"
    -              exit 0
    -            fi
    -            echo "Push attempt $i failed, retrying..."
    -            # Re-login before retry
    -            echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
    -            sleep 5
    -          done
    -          
    -          echo "Failed to push manifest after 3 attempts"
    -          exit 1
    \ No newline at end of file
    
  • .github/workflows/unit_tests.yml · +0 −96 · removed
    @@ -1,96 +0,0 @@
    -name: Unit Tests
    -
    -on:
    -  push:
    -    branches:
    -      - '**'
    -    paths:
    -      - 'src/**'
    -      - '.github/**'
    -
    -jobs:
    -  server-tests:
    -    runs-on: ubuntu-latest
    -
    -    steps:
    -    - name: Checkout repository
    -      uses: actions/checkout@v2
    -
    -    - name: Set up Python
    -      uses: actions/setup-python@v2
    -      with:
    -        python-version: '3.12'
    -
    -    - name: Install system dependencies
    -      run: |
    -        sudo apt-get update
    -        sudo apt-get install -y xvfb python3-tk python3-dev scrot
    -
    -    - name: Install UV
    -      run: |
    -        pip install uv
    -
    -    - name: Run UV Sync
    -      run: |
    -        uv sync --all-groups --all-extras
    -
    -    - name: Set Environment Variables
    -      run: |
    -        echo "AZURE_OPENAI_ENDPOINT=${{ secrets.AZURE_OPENAI_ENDPOINT }}" >> $GITHUB_ENV
    -        echo "AZURE_OPENAI_API_VERSION=${{ secrets.AZURE_OPENAI_API_VERSION }}" >> $GITHUB_ENV
    -        echo "AZURE_OPENAI_API_KEY=${{ secrets.AZURE_OPENAI_API_KEY }}" >> $GITHUB_ENV
    -
    -    - name: Run Server Tests
    -      run: |
    -        # Start Xvfb
    -        Xvfb :99 -screen 0 1024x768x24 > /dev/null 2>&1 &
    -        # Export display for GUI applications
    -        export DISPLAY=:99
    -        # Wait for Xvfb to start
    -        sleep 3
    -        # Run the tests
    -        uv run run_tools_server.py & uv run run_main_server.py & sleep 10 && uv run pytest tests/server/
    -
    -
    -  client-tests:
    -    runs-on: ubuntu-latest
    -
    -    steps:
    -    - name: Checkout repository
    -      uses: actions/checkout@v2
    -
    -    - name: Set up Python
    -      uses: actions/setup-python@v2
    -      with:
    -        python-version: '3.12'
    -
    -    - name: Install system dependencies
    -      run: |
    -        sudo apt-get update
    -        sudo apt-get install -y xvfb python3-tk python3-dev scrot
    -
    -    - name: Install UV
    -      run: |
    -        pip install uv
    -
    -    - name: Run UV Sync
    -      run: |
    -        uv sync --all-groups --all-extras
    -
    -    - name: Set Environment Variables
    -      run: |
    -        echo "AZURE_OPENAI_ENDPOINT=${{ secrets.AZURE_OPENAI_ENDPOINT }}" >> $GITHUB_ENV
    -        echo "AZURE_OPENAI_API_VERSION=${{ secrets.AZURE_OPENAI_API_VERSION }}" >> $GITHUB_ENV
    -        echo "AZURE_OPENAI_API_KEY=${{ secrets.AZURE_OPENAI_API_KEY }}" >> $GITHUB_ENV
    -
    -
    -    - name: Run Client Tests
    -      run: |
    -        # Start Xvfb
    -        Xvfb :99 -screen 0 1024x768x24 > /dev/null 2>&1 &
    -        # Export display for GUI applications
    -        export DISPLAY=:99
    -        # Wait for Xvfb to start
    -        sleep 3
    -        # Run the tests
    -        uv run run_tools_server.py & uv run run_main_server.py & sleep 10 && uv run pytest tests/server/
    \ No newline at end of file
    
  • pyproject.toml · +4 −10 · modified
    @@ -8,31 +8,25 @@ authors = [
     ]
     requires-python = ">=3.10"
     dependencies = [
    -    "cloudpickle>=3.1.0",
    -    "dill>=0.3.9",
    -    "httpx>=0.27.2",
         "psutil==6.1.1",
         "rich>=13.9.4",
         "sentry-sdk[opentelemetry]>=2.19.2",
         "toml>=0.10.2",
         "uv>=0.5.20",
    -    "fastapi>=0.115.6",
    -    "mcp[cli]==1.5.0",
    -    "pydantic-ai==0.1.3",
    +    "mcp[cli]==1.9.0",
    +    "pydantic-ai==0.2.11",
         "python-dotenv>=1.0.1",
         "uvicorn>=0.34.0",
         "beautifulsoup4>=4.12.3",
         "boto3>=1.35.99",
         "botocore>=1.35.99",
         "google>=3.0.0",
    -    "markitdown==0.0.1",
    -    "matplotlib>=3.10.0",
    -    "pyautogui>=0.9.54",
    +    "markitdown[all]==0.0.1",
         "python-multipart>=0.0.20",
         "requests>=2.32.3",
         "duckduckgo-search>=7.3.1",
         "nest-asyncio>=1.6.0",
    -    "pydantic-ai-slim[anthropic,bedrock,openai]>=0.0.45",
    +    "pydantic-ai-slim[anthropic,bedrock,openai,mcp]>=0.0.45",
         "pydantic==2.10.5",
     ]
     
    
  • run_main_server.py · +0 −5 · removed
    @@ -1,5 +0,0 @@
    -from upsonic.server import run_main_server_internal
    -
    -if __name__ == "__main__":
    -    run_main_server_internal()
    -
    
  • run_tools_server.py · +0 −5 · removed
    @@ -1,5 +0,0 @@
    -from upsonic.tools_server import run_tools_server_internal
    -
    -if __name__ == "__main__":
    -    run_tools_server_internal()
    -    
    \ No newline at end of file
    
  • src/upsonic/client/agent_configuration/agent_configuration.py · +0 −393 · removed
    @@ -1,393 +0,0 @@
    -from dataclasses import Field
    -import uuid
    -from pydantic import BaseModel
    -import asyncio
    -import subprocess
    -import sys
    -
    -from typing import Any, List, Dict, Optional, Type, Union
    -
    -from ..knowledge_base.knowledge_base import KnowledgeBase
    -from ..tasks.tasks import Task
    -from ..printing import mcp_tool_operation, tool_operation, error_message
    -
    -from ..latest_upsonic_client import latest_upsonic_client
    -from ...model_registry import ModelNames
    -
    -
    -def register_tools(client, tools):
    -    """Register tools with the client."""
    -    if tools is not None:
    -        for tool in tools:
    -            # Handle special tool classes from upsonic.client.tools
    -            if tool.__module__ == 'upsonic.client.tools':
    -                client.tool()(tool)
    -                continue
    -                
    -            # If tool is a class (not an instance)
    -            if isinstance(tool, type):
    -                if hasattr(tool, 'command'):
    -                    # Check if command is UVX and UV is not installed
    -                    if hasattr(tool, 'command') and tool.command == 'uvx':
    -                        try:
    -                            # Try to run uv --version to check if it's installed
    -                            subprocess.run(['uv', '--version'], capture_output=True, check=True)
    -                        except (subprocess.CalledProcessError, FileNotFoundError):
    -                            error_message(
    -                                "UV Installation Error",
    -                                "UV is not installed. Please install UV",
    -                                error_code=500
    -                            )
    -                            sys.exit(1)
    -                    
    -                    # Check if command is NPX and Node.js is not installed
    -                    if hasattr(tool, 'command') and tool.command == 'npx':
    -                        try:
    -                            # Try to run node --version to check if Node.js is installed
    -                            subprocess.run(['node', '--version'], capture_output=True, check=True)
    -                        except (subprocess.CalledProcessError, FileNotFoundError):
    -                            error_message(
    -                                "Node.js Installation Error",
    -                                "Node.js is not installed. Please install Node.js to use tools with NPX command.",
    -                                error_code=500
    -                            )
    -                            sys.exit(1)
    -
    -                    client.mcp()(tool)
    -                elif hasattr(tool, 'url'):
    -                    client.sse_mcp()(tool)
    -                else:
    -                    client.tool()(tool)
    -            else:
    -                # Get all attributes of the tool instance/object
    -                tool_attrs = dir(tool)
    -                
    -                # Filter out special methods and get only callable attributes
    -                functions = [attr for attr in tool_attrs 
    -                           if not attr.startswith('__') and callable(getattr(tool, attr))]
    -                
    -                if functions:
    -                    # If the tool has functions, use the tool() decorator
    -                    tool_operation(f"Tool: {tool.__class__.__name__}", "Successfully Registered")
    -
    -                    if not isinstance(tool, object):
    -                        client.tool()(tool.__class__)
    -                    else:
    -                        client.tool()(tool)
    -                else:
    -                    # If the tool has no functions, use mcp()
    -                    mcp_tool_operation(f"MCP Tool: {tool.__class__.__name__}", "Successfully Registered")
    -                    client.mcp()(tool.__class__)
    -    return client
    -
    -
    -def get_or_create_client(debug: bool = False):
    -    """Get existing client or create a new one."""
    -    
    -    global latest_upsonic_client
    -    
    -    if latest_upsonic_client is not None:
    -        # Check if the existing client's status is False
    -        if not latest_upsonic_client.status():
    -            from ..base import UpsonicClient
    -            new_client = UpsonicClient("localserver", debug=debug)
    -            latest_upsonic_client = new_client
    -        return latest_upsonic_client
    -    
    -    from ..base import UpsonicClient
    -    the_client = UpsonicClient("localserver", debug=debug)
    -    latest_upsonic_client = the_client
    -    return the_client
    -
    -
    -def execute_task(agent_config, task: Task, debug: bool = False):
    -    """Execute a task with the given agent configuration."""
    -    import asyncio
    -    
    -    try:
    -        # Check if there's a running event loop
    -        loop = asyncio.get_running_loop()
    -        if loop.is_running():
    -            # If there's a running loop, run the coroutine in that loop
    -            return asyncio.run_coroutine_threadsafe(
    -                execute_task_async(agent_config, task, debug), 
    -                loop
    -            ).result()
    -    except RuntimeError:
    -        # No running event loop
    -        pass
    -    
    -    # If no running loop or exception occurred, create a new one
    -    return asyncio.run(execute_task_async(agent_config, task, debug))
    -
    -async def execute_task_async(agent_config, task: Task, debug: bool = False):
    -    """Execute a task with the given agent configuration asynchronously using true async methods."""
    -    global latest_upsonic_client
    -    
    -    # If agent has a custom client, use it
    -    if hasattr(agent_config, 'client') and agent_config.client is not None:
    -        the_client = agent_config.client
    -    else:
    -        # Get or create client using existing process
    -        the_client = get_or_create_client(debug=debug)
    -    
    -    # If task has no tools defined but agent has tools, use the agent's tools
    -    if not task.tools and hasattr(agent_config, 'tools') and agent_config.tools:
    -        task.tools = agent_config.tools
    -    
    -    # Register tools if needed
    -    the_client = register_tools(the_client, task.tools)
    -    
    -    # Use the async run method directly
    -    await the_client.run_async(agent_config, task)
    -    
    -    return task.response
    -
    -
    -class AgentConfiguration(BaseModel):
    -
    -
    -    agent_id_: Optional[str] = None
    -    job_title: str
    -    company_url: Optional[str] = None
    -    company_objective: Optional[str] = None
    -    name: str = ""
    -    contact: str = ""
    -    model: str = "openai/gpt-4o"
    -    client: Any = None  # Add client parameter
    -    debug: bool = False
    -    reliability_layer: Any = None  # Changed to Any to accept any class or instance
    -    system_prompt: Optional[str] = None
    -    tools: List[Any] = []
    -    retry: int = 3
    -
    -
    -    sub_task: bool = True
    -    reflection: bool = False
    -    memory: bool = False
    -    caching: bool = True
    -    cache_expiry: int = 60 * 60
    -    knowledge_base: Optional[KnowledgeBase] = None
    -    context_compress: bool = False
    -
    -    def __init__(
    -        self, 
    -        job_title: str, 
    -        company_url: Optional[str] = None, 
    -        company_objective: Optional[str] = None,
    -        name: str = "",
    -        contact: str = "",
    -        model: ModelNames = "openai/gpt-4o",
    -        client: Any = None,
    -        debug: bool = False,
    -        reliability_layer: Any = None,
    -        system_prompt: Optional[str] = None,
    -        tools: Optional[List[Any]] = None,
    -        sub_task: bool = True,
    -        reflection: bool = False,
    -        memory: bool = False,
    -        caching: bool = True,
    -        cache_expiry: int = 60 * 60,
    -        knowledge_base: Optional[KnowledgeBase] = None,
    -        context_compress: bool = False,
    -        agent_id_: Optional[str] = None,
    -        retry: int = 3,
    -        **data
    -    ):
    -        if job_title is not None:
    -            data["job_title"] = job_title
    -        if client is not None:
    -            data["client"] = client
    -        
    -        if tools is None:
    -            tools = []
    -            
    -        data.update({
    -            "agent_id_": agent_id_,
    -            "company_url": company_url,
    -            "company_objective": company_objective,
    -            "name": name,
    -            "contact": contact,
    -            "model": model,
    -            "debug": debug,
    -            "reliability_layer": reliability_layer,
    -            "system_prompt": system_prompt,
    -            "tools": tools,
    -            "sub_task": sub_task,
    -            "retry": retry,
    -            "reflection": reflection,
    -            "memory": memory,
    -            "caching": caching,
    -            "cache_expiry": cache_expiry,
    -            "knowledge_base": knowledge_base,
    -            "context_compress": context_compress
    -        })
    -
    -        super().__init__(**data)
    -        self.validate_tools()
    -
    -    def validate_tools(self):
    -        """
    -        Validates each tool in the tools list.
    -        If a tool is a class and has a __control__ method, runs that method to verify it returns True.
    -        Raises an exception if the __control__ method returns False or raises an exception.
    -        """
    -        if not self.tools:
    -            return
    -            
    -        for tool in self.tools:
    -            # Check if the tool is a class
    -            if isinstance(tool, type) or hasattr(tool, '__class__'):
    -                # Check if the class has a __control__ method
    -                if hasattr(tool, '__control__') and callable(getattr(tool, '__control__')):
    -                    try:
    -                        # Run the __control__ method
    -                        control_result = tool.__control__()
    -                        if not control_result:
    -                            raise ValueError(f"Tool {tool} __control__ method returned False")
    -                    except Exception as e:
    -                        # Re-raise any exceptions from the __control__ method
    -                        raise ValueError(f"Error validating tool {tool}: {str(e)}")
    -
    -
    -    @property
    -    def agent_id(self):
    -        if self.agent_id_ is None:
    -            self.agent_id_ = str(uuid.uuid4())
    -        return self.agent_id_
    -    
    -    def do(self, task: Task):
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.do_async(task), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.do_async(task))
    -    
    -    async def do_async(self, task: Task):
    -        """Asynchronous version of the do method."""
    -        return await execute_task_async(self, task, self.debug)
    -    
    -    def print_do(self, task: Task):
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                result = asyncio.run_coroutine_threadsafe(
    -                    self.print_do_async(task), 
    -                    loop
    -                ).result()
    -                return result
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.print_do_async(task))
    -        
    -    async def print_do_async(self, task: Task):
    -        """Asynchronous version of the print_do method."""
    -        result = await self.do_async(task)
    -        print(result)
    -        return result
    -    
    -    def parallel_do(self, tasks: List[Task]):
    -        """Execute multiple tasks in parallel and return their results.
    -        
    -        Args:
    -            tasks: A list of Task objects to execute in parallel
    -            
    -        Returns:
    -            A list of task responses in the same order as the input tasks
    -        """
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.parallel_do_async(tasks), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.parallel_do_async(tasks))
    -    
    -    async def parallel_do_async(self, tasks: List[Task]):
    -        """Asynchronous version of the parallel_do method.
    -        
    -        Args:
    -            tasks: A list of Task objects to execute in parallel
    -            
    -        Returns:
    -            A list of task responses in the same order as the input tasks
    -        """
    -        # Create a list of coroutines for each task
    -        coroutines = [self.do_async(task) for task in tasks]
    -        
    -        # Execute all tasks in parallel and return their results
    -        return await asyncio.gather(*coroutines)
    -    
    -    def parallel_print_do(self, tasks: List[Task]):
    -        """Execute multiple tasks in parallel, print their results, and return them.
    -        
    -        Args:
    -            tasks: A list of Task objects to execute in parallel
    -            
    -        Returns:
    -            A list of task responses in the same order as the input tasks
    -        """
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.parallel_print_do_async(tasks), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.parallel_print_do_async(tasks))
    -    
    -    async def parallel_print_do_async(self, tasks: List[Task]):
    -        """Asynchronous version of the parallel_print_do method.
    -        
    -        Args:
    -            tasks: A list of Task objects to execute in parallel
    -            
    -        Returns:
    -            A list of task responses in the same order as the input tasks
    -        """
    -        # Execute all tasks in parallel
    -        results = await self.parallel_do_async(tasks)
    -        
    -        # Print each result
    -        for result in results:
    -            print(result)
    -        
    -        return results
    
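The removed `agent_configuration.py` repeats one sync-over-async bridge in `do`, `print_do`, `parallel_do`, and `parallel_print_do`: detect a running event loop with `asyncio.get_running_loop()`, and either hand the coroutine to that loop via `run_coroutine_threadsafe` or fall back to `asyncio.run`. One caveat: calling `.result()` on `run_coroutine_threadsafe` from the loop's own thread blocks that loop and risks deadlock. Below is a minimal sketch of the pattern (the `run_sync` helper name is illustrative, not from the diff) that uses a worker thread instead to sidestep that hazard, much as `base.py` in the same commit does:

```python
import asyncio
import concurrent.futures


def run_sync(coro):
    """Run a coroutine from synchronous code.

    Mirrors the bridge in the removed code: if no loop is running in
    this thread, asyncio.run() suffices; if a loop IS running here,
    execute the coroutine on a fresh loop in a worker thread, since
    run_coroutine_threadsafe(...).result() called from the loop's own
    thread would block the loop and deadlock.
    """
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No running loop in this thread: safe to start one.
        return asyncio.run(coro)
    # A loop is already running here: delegate to a worker thread.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()


async def add(a, b):
    return a + b


print(run_sync(add(2, 3)))  # → 5
```

Called from plain code it behaves like `asyncio.run`; called from inside a coroutine it still returns a result rather than raising `RuntimeError: asyncio.run() cannot be called from a running event loop`.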
  • src/upsonic/client/base.py+0 261 removed
    @@ -1,261 +0,0 @@
    -from pydantic import BaseModel
    -from typing import Dict, Any
    -import httpx
    -import time
    -import asyncio
    -import concurrent.futures
    -import threading
    -
    -
    -from .level_one.call import Call
    -from .level_two.agent import Agent
    -from .tasks.tasks import Task
    -from .agent_configuration.agent_configuration import AgentConfiguration
    -from .storage.storage import Storage, ClientConfig
    -from .tools.tools import Tools
    -from .markdown.markdown import Markdown
    -from .others.others import Others
    -from ..exception import ServerStatusException, TimeoutException
    -
    -from .printing import connected_to_server
    -
    -
    -from .latest_upsonic_client import latest_upsonic_client
    -
    -
    -# Helper function to run a coroutine in a new thread with a new event loop
    -def run_coroutine_in_new_thread(coro):
    -    """
    -    Run a coroutine in a new thread with a new event loop.
    -    This is useful when we're in an async context but need a synchronous result.
    -    
    -    Args:
    -        coro: The coroutine to run
    -        
    -    Returns:
    -        The result of the coroutine
    -    """
    -    def run_coro_in_thread(coro):
    -        loop = asyncio.new_event_loop()
    -        asyncio.set_event_loop(loop)
    -        try:
    -            return loop.run_until_complete(coro)
    -        finally:
    -            loop.close()
    -    
    -    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
    -        return executor.submit(run_coro_in_thread, coro).result()
    -
    -
    -# Create a base class with url
    -class UpsonicClient(Call, Storage, Tools, Agent, Markdown, Others):
    -
    -    def __init__(self, url: str, debug: bool = False, **kwargs):
    -        """Initialize the Upsonic client.
    -        
    -        Args:
    -            url: The server URL to connect to
    -            debug: Whether to enable debug mode
    -            **kwargs: Configuration options that match ClientConfig fields
    -        """
    -        start_time = time.time()
    -        self.debug = debug
    -
    -        # Set server type and URL first
    -        if "0.0.0.0" in url:
    -            self.server_type = "Local(Docker)"
    -        elif "localhost" in url:
    -            self.server_type = "Local(Docker)"
    -        elif "upsonic.ai" in url:
    -            self.server_type = "Cloud(Upsonic)"
    -        elif "devserver" in url or "localserver" in url:
    -            self.server_type = "Local(LocalServer)"
    -        else:
    -            self.server_type = "Cloud(Unknown)"
    -
    -        # Handle local server setup
    -        if url == "devserver" or url == "localserver":
    -            url = "http://localhost:7541"
    -            from ..server import run_dev_server, stop_dev_server, is_tools_server_running, is_main_server_running
    -            if debug:
    -                run_dev_server(redirect_output=False)
    -            else:
    -                run_dev_server(redirect_output=True)
    -
    -            import atexit
    -            def exit_handler():
    -                if is_tools_server_running() or is_main_server_running():
    -                    stop_dev_server()
    -            atexit.register(exit_handler)
    -
    -        # Set URL and default model
    -        self.url = url
    -        self.default_llm_model = "openai/gpt-4o"
    -
    -        # Check if we're in an async context
    -        try:
    -            loop = asyncio.get_running_loop()
    -            in_async_context = True
    -        except RuntimeError:
    -            in_async_context = False
    -
    -        # Check server status before proceeding
    -        if in_async_context:
    -            # We're in an async context, but __init__ can't be async
    -            # We need to run the async method in a new thread
    -            status_ok = run_coroutine_in_new_thread(self.status_async())
    -        else:
    -            # We're not in an async context, use asyncio.run
    -            status_ok = asyncio.run(self.status_async())
    -            
    -        if not status_ok:
    -            total_time = time.time() - start_time
    -            connected_to_server(self.server_type, "Failed", total_time)
    -            raise ServerStatusException("Failed to connect to the server at initialization.")
    -        
    -        # Handle configuration through ClientConfig model
    -        config = ClientConfig(**(kwargs or {}))
    -        
    -        # Create a dictionary of non-None values
    -        config_dict = {
    -            key: str(value) for key, value in config.model_dump().items() 
    -            if value is not None
    -        }
    -        
    -        # Bulk set the configurations if there are any
    -        if config_dict:
    -            if in_async_context:
    -                # We're in an async context, but __init__ can't be async
    -                # We need to run the async method in a new thread
    -                run_coroutine_in_new_thread(self.bulk_set_config_async(config_dict))
    -            else:
    -                # We're not in an async context, use asyncio.run
    -                asyncio.run(self.bulk_set_config_async(config_dict))
    -
    -        global latest_upsonic_client
    -        latest_upsonic_client = self
    -        total_time = time.time() - start_time
    -        connected_to_server(self.server_type, "Established", total_time)
    -
    -    def status(self) -> bool:
    -        """Check the server status."""
    -        try:
    -            # Try to get the current event loop
    -            try:
    -                loop = asyncio.get_running_loop()
    -                if loop.is_running():
    -                    # We're in an async context, but this is a sync method
    -                    # We need to run the async method in a new thread
    -                    return run_coroutine_in_new_thread(self.status_async())
    -            except RuntimeError:
    -                # No event loop is running, use asyncio.run
    -                return asyncio.run(self.status_async())
    -        except httpx.RequestError:
    -            return False
    -
    -    async def status_async(self) -> bool:
    -        """Check the server status asynchronously."""
    -        try:
    -            async with httpx.AsyncClient() as client:
    -                response = await client.get(self.url + "/status")
    -                return response.status_code == 200
    -        except httpx.RequestError:
    -            return False
    -
    -    def send_request(self, endpoint: str, data: Dict[str, Any], files: Dict[str, Any] = None, method: str = "POST", return_raw: bool = False) -> Any:
    -        """
    -        General method to send an API request.
    -
    -        Args:
    -            endpoint: The API endpoint to send the request to.
    -            data: The data to send in the request.
    -            files: Optional files to upload.
    -            method: HTTP method to use (GET or POST)
    -            return_raw: Whether to return the raw response content instead of JSON
    -
    -        Returns:
    -            The response from the API, either as JSON or raw content.
    -        """
    -        try:
    -            # Try to get the current event loop
    -            try:
    -                loop = asyncio.get_running_loop()
    -                if loop.is_running():
    -                    # We're in an async context, but this is a sync method
    -                    # We need to run the async method in a new thread
    -                    return run_coroutine_in_new_thread(
    -                        self.send_request_async(endpoint, data, files, method, return_raw)
    -                    )
    -            except RuntimeError:
    -                # No event loop is running, use asyncio.run
    -                return asyncio.run(self.send_request_async(endpoint, data, files, method, return_raw))
    -        except httpx.RequestError as e:
    -            raise e
    -
    -    async def send_request_async(self, endpoint: str, data: Dict[str, Any], files: Dict[str, Any] = None, method: str = "POST", return_raw: bool = False) -> Any:
    -        """
    -        Asynchronous version of send_request.
    -        General method to send an API request asynchronously.
    -
    -        Args:
    -            endpoint: The API endpoint to send the request to.
    -            data: The data to send in the request.
    -            files: Optional files to upload.
    -            method: HTTP method to use (GET or POST)
    -            return_raw: Whether to return the raw response content instead of JSON
    -
    -        Returns:
    -            The response from the API, either as JSON or raw content.
    -        """
    -        async with httpx.AsyncClient() as client:
    -            if method.upper() == "GET":
    -                response = await client.get(self.url + endpoint, params=data, timeout=600.0)
    -            else:
    -                if files:
    -                    response = await client.post(self.url + endpoint, data=data, files=files, timeout=600.0)
    -                else:
    -                    response = await client.post(self.url + endpoint, json=data, timeout=600.0)
    -                
    -            if response.status_code == 408:
    -                raise TimeoutException("Request timed out")
    -            response.raise_for_status()
    -            
    -            return response.content if return_raw else response.json()
    -
    -    def run(self, *args, **kwargs):
    -        """
    -        Run method that delegates to the appropriate async implementation.
    -        """
    -        try:
    -            # Try to get the current event loop
    -            try:
    -                loop = asyncio.get_running_loop()
    -                if loop.is_running():
    -                    # We're in an async context, but this is a sync method
    -                    # We need to run the async method in a new thread
    -                    return run_coroutine_in_new_thread(
    -                        self.run_async(*args, **kwargs)
    -                    )
    -            except RuntimeError:
    -                # No event loop is running, use asyncio.run
    -                return asyncio.run(self.run_async(*args, **kwargs))
    -        except Exception as e:
    -            raise e
    -
    -    async def run_async(self, *args, **kwargs):
    -        """
    -        Asynchronous version of the run method.
    -        """
    -        llm_model = kwargs.get("llm_model", None)
    -
    -        # If there are two positional arguments, run self.agent_async(first, second)
    -        if len(args) == 2:
    -            
    -            if isinstance(args[0], AgentConfiguration) and isinstance(args[1], Task):
    -                return await self.agent_async(args[0], args[1])
    -            elif isinstance(args[0], list):
    -                return await self.multi_agent_async(args[0], args[1])
    -        
    -
    -        if len(args) == 1:
    -            return await self.call_async(args[0], llm_model=llm_model)
    \ No newline at end of file
    
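The removed `base.py` centralizes the same bridge in `run_coroutine_in_new_thread`, which spins up a dedicated event loop in a worker thread so that synchronous methods such as `status()` and `send_request()` keep working when called from an async context. A self-contained sketch of that helper as it appears in the diff, with a small usage demo (the `outer` coroutine is illustrative only):

```python
import asyncio
import concurrent.futures


def run_coroutine_in_new_thread(coro):
    """Run `coro` to completion on a new event loop in a worker thread
    and return its result. Useful where asyncio.run() would raise
    because a loop is already running in the current thread; note that
    it still blocks the caller until the coroutine finishes.
    """
    def runner():
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            return loop.run_until_complete(coro)
        finally:
            loop.close()

    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(runner).result()


async def outer():
    # asyncio.run() would raise RuntimeError here; the helper works.
    return run_coroutine_in_new_thread(asyncio.sleep(0, result="ok"))


print(asyncio.run(outer()))  # → ok
```

The trade-off of this design is that the caller's loop is blocked while the worker runs, so it preserves a synchronous API at the cost of concurrency during the call.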
  • src/upsonic/client/direct_llm_call/direct_llm_cal.py+0 321 removed
    @@ -1,321 +0,0 @@
    -from ..agent_configuration.agent_configuration import get_or_create_client, register_tools
    -from ..tasks.tasks import Task
    -from typing import Any, Callable, TypeVar, cast
    -
    -T = TypeVar('T')
    -
    -from ...model_registry import ModelNames
    -from ..printing import print_price_id_summary
    -
    -class DirectStatic:
    -    """Static methods for making direct LLM calls using the Upsonic client."""
    -    
    -    @staticmethod
    -    def do(task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int = 3):
    -        """
    -        Execute a direct LLM call with the given task and model.
    -        
    -        Args:
    -            task: The task to execute
    -            model: The LLM model to use (default: None)
    -            client: Optional custom client to use instead of creating a new one
    -            debug: Whether to enable debug mode
    -            retry: Number of retries for failed calls (default: 3)
    -            
    -        Returns:
    -            The response from the LLM
    -        """
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    DirectStatic.do_async(task, model, client, debug, retry), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(DirectStatic.do_async(task, model, client, debug, retry))
    -
    -    @staticmethod
    -    async def do_async(task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int = 3):
    -        """
    -        Execute a direct LLM call with the given task and model asynchronously.
    -        
    -        Args:
    -            task: The task to execute
    -            model: The LLM model to use (default: None)
    -            client: Optional custom client to use instead of creating a new one
    -            debug: Whether to enable debug mode
    -            retry: Number of retries for failed calls (default: 3)
    -            
    -        Returns:
    -            The response from the LLM
    -        """
    -        global latest_upsonic_client
    -        from ..latest_upsonic_client import latest_upsonic_client
    -
    -        # Use provided client or get/create one
    -        if client is not None:
    -            the_client = client
    -        else:
    -            the_client = get_or_create_client(debug=debug)
    -        
    -        # Register tools if needed
    -        the_client = register_tools(the_client, task.tools)
    -
    -        # Execute the direct call asynchronously with retry parameter
    -        await the_client.call_async(task, model, retry=retry)
    -        
    -        # Print the price ID summary if the task has a price ID
    -        if not task.not_main_task:
    -            print_price_id_summary(task.price_id, task)
    -            
    -        return task.response
    -
    -    @staticmethod
    -    def print_do(task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int = 3):
    -        """
    -        Execute a direct LLM call and print the result.
    -        
    -        Args:
    -            task: The task to execute
    -            model: The LLM model to use (default: None)
    -            client: Optional custom client to use instead of creating a new one
    -            debug: Whether to enable debug mode
    -            retry: Number of retries for failed calls (default: 3)
    -            
    -        Returns:
    -            The response from the LLM
    -        """
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    DirectStatic.print_do_async(task, model, client, debug, retry), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(DirectStatic.print_do_async(task, model, client, debug, retry))
    -
    -    @staticmethod
    -    async def print_do_async(task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int = 3):
    -        """
    -        Execute a direct LLM call and print the result asynchronously.
    -        
    -        Args:
    -            task: The task to execute
    -            model: The LLM model to use (default: None)
    -            client: Optional custom client to use instead of creating a new one
    -            debug: Whether to enable debug mode
    -            retry: Number of retries for failed calls (default: 3)
    -            
    -        Returns:
    -            The response from the LLM
    -        """
    -        result = await DirectStatic.do_async(task, model, client, debug, retry)
    -        print(result)
    -        return result
    -
    -
    -class DirectInstance:
    -    """Instance-based class for making direct LLM calls using the Upsonic client."""
    -    
    -    def __init__(self, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int = 3):
    -        """
    -        Initialize a DirectInstance with specific model and client settings.
    -        
    -        Args:
    -            model: The LLM model to use (default: None)
    -            client: Optional custom client to use instead of creating a new one
    -            debug: Whether to enable debug mode
    -            retry: Number of retries for failed calls (default: 3)
    -        """
    -        self.model = model
    -        self.client = client
    -        self.debug = debug
    -        self.retry = retry
    -    
    -    def do(self, task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int | None = None):
    -        """
    -        Execute a direct LLM call using instance defaults or overrides.
    -        
    -        Args:
    -            task: The task to execute
    -            model: The LLM model to use (overrides instance default if provided)
    -            client: Optional custom client (overrides instance default if provided)
    -            debug: Whether to enable debug mode (overrides instance default if provided)
    -            retry: Number of retries for failed calls (overrides instance default if provided)
    -            
    -        Returns:
    -            The response from the LLM
    -        """
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.do_async(task, model, client, debug, retry), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.do_async(task, model, client, debug, retry))
    -
    -    async def do_async(self, task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int | None = None):
    -        """
    -        Execute a direct LLM call using instance defaults or overrides asynchronously.
    -        
    -        Args:
    -            task: The task to execute
    -            model: The LLM model to use (overrides instance default if provided)
    -            client: Optional custom client (overrides instance default if provided)
    -            debug: Whether to enable debug mode (overrides instance default if provided)
    -            retry: Number of retries for failed calls (overrides instance default if provided)
    -            
    -        Returns:
    -            The response from the LLM
    -        """
    -        # Use provided parameters or instance defaults
    -        actual_model = model if model is not None else self.model
    -        actual_client = client if client is not None else self.client
    -        actual_debug = debug if debug is not False else self.debug
    -        actual_retry = retry if retry is not None else self.retry
    -        
    -        # Call the static method with the resolved parameters
    -        result = await DirectStatic.do_async(task, actual_model, actual_client, actual_debug, actual_retry)
    -        
    -        # No need to print price_id summary here since DirectStatic.do_async already does it
    -        return result
    -        
    -    def print_do(self, task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int | None = None):
    -        """
    -        Execute a direct LLM call and print the result.
    -        
    -        Args:
    -            task: The task to execute
    -            model: The LLM model to use (overrides instance default if provided)
    -            client: Optional custom client (overrides instance default if provided)
    -            debug: Whether to enable debug mode (overrides instance default if provided)
    -            retry: Number of retries for failed calls (overrides instance default if provided)
    -            
    -        Returns:
    -            The response from the LLM
    -        """
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.print_do_async(task, model, client, debug, retry), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.print_do_async(task, model, client, debug, retry))
    -
    -    async def print_do_async(self, task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int | None = None):
    -        """
    -        Execute a direct LLM call and print the result asynchronously.
    -        
    -        Args:
    -            task: The task to execute
    -            model: The LLM model to use (overrides instance default if provided)
    -            client: Optional custom client (overrides instance default if provided)
    -            debug: Whether to enable debug mode (overrides instance default if provided)
    -            retry: Number of retries for failed calls (overrides instance default if provided)
    -            
    -        Returns:
    -            The response from the LLM
    -        """
    -        result = await self.do_async(task, model, client, debug, retry)
    -        print(result)
    -        return result
    -
    -
    -class Direct:
    -    """
    -    Router class that provides both static and instance-based approaches for direct LLM calls.
    -    
    -    When used without initialization, it provides static methods.
    -    When initialized with parameters, it returns an instance-based object.
    -    
    -    Example:
    -        # Correct usage with named model parameter:
    -        direct = Direct(model="openai/gpt-4o")
    -        direct = Direct(model="claude/claude-3-5-sonnet")
    -        
    -        # Incorrect usage:
    -        # direct = Direct("openai/gpt-4o")  # Wrong! Must use model=
    -        # direct = Direct("Researcher Direct")  # Wrong!
    -        
    -        # For agent-based operations, use Agent instead:
    -        # agent = Agent("Researcher Agent")  # Correct!
    -    """
    -    
    -    # Static methods that delegate to DirectStatic
    -    @staticmethod
    -    def do(task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int = 3):
    -        return DirectStatic.do(task, model, client, debug, retry)
    -    
    -    @staticmethod
    -    def print_do(task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int = 3):
    -        return DirectStatic.print_do(task, model, client, debug, retry)
    -
    -    @staticmethod
    -    async def do_async(task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int = 3):
    -        return await DirectStatic.do_async(task, model, client, debug, retry)
    -    
    -    @staticmethod
    -    async def print_do_async(task: Task, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int = 3):
    -        return await DirectStatic.print_do_async(task, model, client, debug, retry)
    -    
    -    def __new__(cls, *args, model: ModelNames | None = None, client: Any = None, debug: bool = False, retry: int = 3):
    -        """
    -        Factory method that returns a DirectInstance object when initialized.
    -        
    -        Args:
    -            model: The LLM model to use (default: None)
    -            client: Optional custom client to use instead of creating a new one
    -            debug: Whether to enable debug mode
    -            retry: Number of retries for failed calls (default: 3)
    -            
    -        Returns:
    -            A DirectInstance object
    -            
    -        Raises:
    -            ValueError: If positional arguments are provided instead of using the named parameter 'model'
    -        """
    -        if args:
    -            raise ValueError(
    -                "Direct() does not accept positional arguments. Use named parameter 'model' instead.\n"
    -                "Example: Direct(model='openai/gpt-4o') instead of Direct('openai/gpt-4o')"
    -            )
    -            
    -        return DirectInstance(model, client, debug, retry)
    
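The wrappers removed above (`do`, `print_do`, `agent_`, etc.) all follow the same sync-over-async bridge: probe for a running event loop, schedule the coroutine with `asyncio.run_coroutine_threadsafe(...).result()` if one exists, otherwise fall back to `asyncio.run`. One known pitfall with that shape is that calling `.result()` from the loop's own thread blocks that thread while the loop waits to run the coroutine, which can deadlock. A minimal sketch of a safer bridge, under the assumption that a worker thread is acceptable (the names `run_sync` and `sample_coro` are illustrative, not from the patch):

```python
import asyncio
import concurrent.futures


async def sample_coro(x: int) -> int:
    # Stand-in for a real async call such as do_async(...)
    await asyncio.sleep(0)
    return x * 2


def run_sync(coro):
    """Run a coroutine from synchronous code, with or without a running loop."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop in this thread: starting a fresh one is safe.
        return asyncio.run(coro)
    # A loop is already running in this thread. Blocking here on
    # run_coroutine_threadsafe(...).result() would deadlock, so run the
    # coroutine on its own loop in a short-lived worker thread instead.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()


print(run_sync(sample_coro(21)))  # 42
```

This keeps the removed code's "works from both sync and async callers" contract without the same-thread `.result()` hazard.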
  • src/upsonic/client/language.py+0 21 removed
    @@ -1,21 +0,0 @@
    -from .tasks.tasks import Task
    -from .direct_llm_call.direct_llm_cal import Direct
    -
    -from typing import Optional
    -
    -class Language:
    -    def __init__(self, language: str, task: Task, llm_model: str):
    -        self.language = language
    -        self.task = task
    -        self.llm_model = llm_model
    -
    -    async def transform(self):
    -        language_transformation_task = Task(
    -            f"User task is completed but we want to change the language of the task to {self.language}. Just return the translated result of task. Dont say or put anything to your return. Make one to one translation.",
    -            context=[self.task],
    -            response_format=self.task.response_format,
    -        )
    -
    -        direct = Direct(self.llm_model)
    -        await direct.do_async(language_transformation_task)
    -        return language_transformation_task.response
    
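Note a latent bug visible in this removed file: `Language.transform` constructs `Direct(self.llm_model)` with a positional argument, but `Direct.__new__` (deleted in the same patch, above) explicitly raises `ValueError` on any positional argument and requires `model=`. A minimal reproduction of that guard, with the classes stripped down to just the relevant behavior (everything here is a simplified sketch of the removed code, not the full API):

```python
class DirectInstance:
    """Trimmed stand-in for the instance-based object Direct() returns."""

    def __init__(self, model=None):
        self.model = model


class Direct:
    """Factory: rejects positional args, returns a DirectInstance otherwise."""

    def __new__(cls, *args, model=None):
        if args:
            raise ValueError(
                "Direct() does not accept positional arguments. "
                "Use Direct(model='openai/gpt-4o') instead."
            )
        # Returning a non-Direct object skips Direct.__init__ entirely.
        return DirectInstance(model)


try:
    Direct("openai/gpt-4o")  # positional, as Language.transform did
except ValueError:
    print("rejected")        # rejected

d = Direct(model="openai/gpt-4o")  # keyword form is accepted
print(d.model)                     # openai/gpt-4o
```

So the removed `Language` helper would have raised at runtime on that call path; the keyword form is the one the deleted docstring itself documents as correct.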
  • src/upsonic/client/latest_upsonic_client.py+0 1 removed
    @@ -1 +0,0 @@
    -latest_upsonic_client = None
    \ No newline at end of file
    
  • src/upsonic/client/level_one/call.py+0 268 removed
    @@ -1,268 +0,0 @@
    -import copy
    -import time
    -import cloudpickle
    -import asyncio
    -
    -from ..knowledge_base.knowledge_base import KnowledgeBase
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -
    -import dill
    -import base64
    -import httpx
    -from typing import Any, List, Dict, Optional, Type, Union
    -from pydantic import BaseModel
    -
    -from ..tasks.tasks import Task
    -
    -from ..printing import call_end
    -
    -
    -from ..tasks.task_response import ObjectResponse
    -
    -from ..language import Language
    -
    -from ..level_utilized.utility import context_serializer, response_format_serializer, tools_serializer, response_format_deserializer, error_handler
    -
    -class Call:
    -
    -
    -    def call(
    -        self,
    -        task: Union[Task, List[Task]],
    -        llm_model: str = None,
    -        retry: int = 3
    -    ) -> Any:
    -        
    -        start_time = time.time()
    -
    -        try:
    -            # Try to get the current event loop
    -            try:
    -                loop = asyncio.get_running_loop()
    -                if loop.is_running():
    -                    # If there's a running loop, run the async function in that loop
    -                    if isinstance(task, list):
    -                        for each in task:
    -                            the_result = asyncio.run_coroutine_threadsafe(self.call_async_(each, llm_model, retry), loop).result()
    -                            call_end(the_result["result"], the_result["llm_model"], the_result["response_format"], start_time, time.time(), the_result["usage"], the_result["tool_usage"], self.debug, each.price_id)
    -                    else:
    -                        the_result = asyncio.run_coroutine_threadsafe(self.call_async_(task, llm_model, retry), loop).result()
    -                        call_end(the_result["result"], the_result["llm_model"], the_result["response_format"], start_time, time.time(), the_result["usage"], the_result["tool_usage"], self.debug, task.price_id)
    -                else:
    -                    # If there's a loop but it's not running, use asyncio.run
    -                    if isinstance(task, list):
    -                        for each in task:
    -                            the_result = asyncio.run(self.call_async_(each, llm_model, retry))
    -                            call_end(the_result["result"], the_result["llm_model"], the_result["response_format"], start_time, time.time(), the_result["usage"], the_result["tool_usage"], self.debug, each.price_id)
    -                    else:
    -                        the_result = asyncio.run(self.call_async_(task, llm_model, retry))
    -                        call_end(the_result["result"], the_result["llm_model"], the_result["response_format"], start_time, time.time(), the_result["usage"], the_result["tool_usage"], self.debug, task.price_id)
    -            except RuntimeError:
    -                # No event loop exists, create one with asyncio.run
    -                if isinstance(task, list):
    -                    for each in task:
    -                        the_result = asyncio.run(self.call_async_(each, llm_model, retry))
    -                        call_end(the_result["result"], the_result["llm_model"], the_result["response_format"], start_time, time.time(), the_result["usage"], the_result["tool_usage"], self.debug, each.price_id)
    -                else:
    -                    the_result = asyncio.run(self.call_async_(task, llm_model, retry))
    -                    call_end(the_result["result"], the_result["llm_model"], the_result["response_format"], start_time, time.time(), the_result["usage"], the_result["tool_usage"], self.debug, task.price_id)
    -        except Exception as outer_e:
    -            try:
    -                from ...server import stop_dev_server, stop_main_server, is_tools_server_running, is_main_server_running
    -
    -                if is_tools_server_running() or is_main_server_running():
    -                    stop_dev_server()
    -
    -            except Exception:
    -                pass
    -
    -            raise outer_e
    -
    -        end_time = time.time()
    -
    -        return task.response
    -
    -    def call_(
    -        self,
    -        task: Task,
    -        llm_model: str = None,
    -        retry: int = 3
    -    ) -> Any:
    -        """
    -        Call GPT-4 with optional tools and MCP servers.
    -
    -        Args:
    -            prompt: The input prompt for GPT-4
    -            response_format: The expected response format (can be a type or Pydantic model)
    -            tools: Optional list of tool names to use
    -            retry: Number of retries for failed calls (default: 3)
    -
    -        Returns:
    -            The response in the specified format
    -        """
    -        # Try to get the current event loop
    -        try:
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the async function in that loop
    -                return asyncio.run_coroutine_threadsafe(self.call_async_(task, llm_model, retry), loop).result()
    -            else:
    -                # If there's a loop but it's not running, use asyncio.run
    -                return asyncio.run(self.call_async_(task, llm_model, retry))
    -        except RuntimeError:
    -            # No event loop exists, create one with asyncio.run
    -            return asyncio.run(self.call_async_(task, llm_model, retry))
    -
    -    async def call_async(
    -        self,
    -        task: Union[Task, List[Task]],
    -        llm_model: str = None,
    -        retry: int = 3
    -    ) -> Any:
    -        """
    -        Asynchronous version of the call method.
    -        """
    -        start_time = time.time()
    -
    -        try:
    -            if isinstance(task, list):
    -                for each in task:
    -                    the_result = await self.call_async_(each, llm_model, retry)
    -                    call_end(the_result["result"], the_result["llm_model"], the_result["response_format"], start_time, time.time(), the_result["usage"], the_result["tool_usage"], self.debug, each.price_id)
    -            else:
    -                the_result = await self.call_async_(task, llm_model, retry)
    -                call_end(the_result["result"], the_result["llm_model"], the_result["response_format"], start_time, time.time(), the_result["usage"], the_result["tool_usage"], self.debug, task.price_id)
    -        except Exception as outer_e:
    -            try:
    -                from ...server import stop_dev_server, stop_main_server, is_tools_server_running, is_main_server_running
    -
    -                if is_tools_server_running() or is_main_server_running():
    -                    stop_dev_server()
    -            except Exception:
    -                pass
    -            raise outer_e
    -
    -        end_time = time.time()
    -
    -        return task.response
    -
    -    async def call_async_(
    -        self,
    -        task: Task,
    -        llm_model: str = None,
    -        retry: int = 3
    -    ) -> Any:
    -        """
    -        Asynchronous version of the call_ method.
    -        """
    -        task.start_time = time.time()
    -        from ..trace import sentry_sdk
    -        from ..level_utilized.utility import CallErrorException
    -        
    -        # Use the provided model or default to the client's default
    -        if llm_model is None:
    -            llm_model = self.default_llm_model
    -            
    -        tools = tools_serializer(task.tools)
    -
    -        response_format = task.response_format
    -        with sentry_sdk.start_transaction(op="task", name="Call.call_async") as transaction:
    -            with sentry_sdk.start_span(op="serialize"):
    -                # Serialize the response format if it's a type or BaseModel
    -                response_format_str = response_format_serializer(task.response_format)
    -
    -                new_context = []
    -                if task.context:
    -                    for each in task.context:
    -                        if isinstance(each, KnowledgeBase):
    -                            if not each.rag:
    -                                new_context.append(each.markdown(self))
    -                        else:
    -                            new_context.append(each)
    -
    -                context = context_serializer(new_context, self)
    -
    -            with sentry_sdk.start_span(op="prepare_request"):
    -                # Prepare the request data
    -                data = {
    -                    "prompt": task.description + await task.additional_description(self), 
    -                    "images": task.images_base_64,
    -                    "response_format": response_format_str,
    -                    "tools": tools or [],
    -                    "context": context,
    -                    "llm_model": llm_model,
    -                    "system_prompt": None
    -                }
    -
    -            retry_count = 0
    -            while True:
    -                try:
    -                    with sentry_sdk.start_span(op="send_request"):
    -                        result = await self.send_request_async("/level_one/gpt4o", data)
    -                        original_result = result
    -                        
    -                        # Extract the tool_usage from the result['result'] before changing 'result'
    -                        tool_usage_value = []
    -                        if isinstance(result, dict) and 'result' in result and isinstance(result['result'], dict) and 'tool_usage' in result['result']:
    -                            tool_usage_value = result['result']['tool_usage']
    -                            
    -                            # Store tool calls in the task
    -                            for tool_call in tool_usage_value:
    -                                task.add_tool_call(tool_call)
    -                        
    -                        result = result["result"]
    -                        
    -                        if error_handler(result):  # If it's a retriable error
    -                            if retry > 0 and retry_count < retry:  # Check if retries are enabled and we can retry
    -                                retry_count += 1
    -                                from ..printing import agent_retry
    -                                agent_retry(retry_count, retry)
    -                                continue  # Try again
    -                            else:
    -                                raise CallErrorException(result)  # No more retries, raise the error
    -                        
    -                        break  # If no error or non-retriable error, break the loop
    -
    -                except Exception as e:
    -                    if retry > 0 and retry_count < retry:  # Check if retries are enabled and we can retry
    -                        retry_count += 1
    -                        from ..printing import agent_retry
    -                        agent_retry(retry_count, retry)
    -                        continue  # Try again
    -                    raise e  # No more retries, raise the error
    -
    -            with sentry_sdk.start_span(op="deserialize"):
    -                deserialized_result = response_format_deserializer(response_format_str, result)
    -
    -        task._response = deserialized_result["result"]
    -        
    -
    -        if task.response_lang:
    -            language = Language(task.response_lang, task, llm_model)
    -            processed_result = await language.transform()
    -            task._response = processed_result
    -
    -
    -        response_format_req = None
    -        if response_format_str == "str":
    -            response_format_req = response_format_str
    -        else:
    -            # Class name
    -            response_format_req = response_format.__name__
    -        
    -        task.end_time = time.time()
    -        
    -        # Make sure all necessary fields are extracted properly
    -        result_value = deserialized_result["result"]
    -        usage_value = deserialized_result.get("usage", {"input_tokens": 0, "output_tokens": 0})
    -        
    -        return {
    -            "result": result_value,
    -            "llm_model": llm_model,
    -            "response_format": response_format_req,
    -            "usage": usage_value,
    -            "tool_usage": tool_usage_value
    -        }
    -
    -
    -
    
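The deleted `call_async_` drives its request through a `while True` loop with a `retry_count` counter: retriable errors re-enter the loop (after printing `agent_retry`), and once `retry_count` reaches the `retry` budget the error propagates. The control flow can be sketched in isolation like this (the helper name `call_with_retry` and the `flaky` operation are hypothetical; printing and Sentry spans are omitted):

```python
import asyncio


async def call_with_retry(op, retry: int = 3):
    """Retry an async operation, mirroring the removed loop's shape:
    failures re-enter while the budget lasts; the final failure raises."""
    retry_count = 0
    while True:
        try:
            return await op()
        except Exception:
            if retry > 0 and retry_count < retry:
                retry_count += 1
                continue  # try again
            raise  # budget exhausted


# Illustrative operation that fails twice, then succeeds.
attempts = {"n": 0}


async def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient")
    return "ok"


print(asyncio.run(call_with_retry(flaky)))  # ok
```

One design note: the removed code retried on *any* exception from `send_request_async`, not only those its `error_handler` classified as retriable, so non-transient failures also consumed the retry budget before surfacing.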
  • src/upsonic/client/level_two/agent.py+0 706 removed
    @@ -1,706 +0,0 @@
    -import copy
    -import time
    -import cloudpickle
    -
    -from ..knowledge_base.knowledge_base import KnowledgeBase
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import dill
    -import base64
    -import httpx
    -import hashlib
    -from typing import Any, List, Dict, Optional, Type, Union, Literal
    -from pydantic import BaseModel
    -import uuid
    -
    -from ..tasks.tasks import Task
    -from ..direct_llm_call.direct_llm_cal import Direct
    -
    -from ..printing import agent_end, agent_total_cost, agent_retry, print_price_id_summary
    -
    -from ..tasks.task_response import ObjectResponse
    -
    -from ..agent_configuration.agent_configuration import AgentConfiguration
    -
    -from ..level_utilized.utility import context_serializer
    -
    -from ..level_utilized.utility import context_serializer, response_format_serializer, tools_serializer, response_format_deserializer, error_handler
    -
    -from ...storage.caching import save_to_cache_with_expiry, get_from_cache_with_expiry
    -
    -from ..tools.tools import Search
    -
    -from ...reliability_processor import ReliabilityProcessor
    -
    -from ..language import Language
    -
    -class SubTask(ObjectResponse):
    -    description: str
    -    sources_can_be_used: List[str]
    -    required_output: str
    -    tools: List[str]
    -class SubTaskList(ObjectResponse):
    -    sub_tasks: List[SubTask]
    -
    -class AgentMode(ObjectResponse):
    -    """Mode selection for task decomposition"""
    -    selected_mode: Literal["level_no_step", "level_one"]
    -
    -class SearchResult(ObjectResponse):
    -    any_customers: bool
    -    products: List[str]
    -    services: List[str]
    -    potential_competitors: List[str]
    -class CompanyObjective(ObjectResponse):
    -    objective: str
    -    goals: List[str]
    -    state: str
    -class HumanObjective(ObjectResponse):
    -    job_title: str
    -    job_description: str
    -    job_goals: List[str]
    -    
    -class Characterization(ObjectResponse):
    -    website_content: Union[SearchResult, None]
    -    company_objective: Union[CompanyObjective, None]
    -    human_objective: Union[HumanObjective, None]
    -    name_of_the_human_of_tasks: str = None
    -    contact_of_the_human_of_tasks: str = None
    -
    -class OtherTask(ObjectResponse):
    -    task: str
    -    result: Any
    -
    -class Agent:
    -
    -    def agent_(
    -        self,
    -        agent_configuration: AgentConfiguration,
    -        task: Task,
    -        llm_model: str = None,
    -    ) -> Any:
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.agent_async_(agent_configuration, task, llm_model), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.agent_async_(agent_configuration, task, llm_model))
    -
    -    def send_agent_request(
    -        self,
    -        agent_configuration: AgentConfiguration,
    -        task: Task,
    -        llm_model: str = None,
    -    ) -> Any:
    -        from ..trace import sentry_sdk
    -        from ..level_utilized.utility import CallErrorException
    -        """
    -        Call GPT-4 with optional tools and MCP servers.
    -
    -        Args:
    -            prompt: The input prompt for GPT-4
    -            response_format: The expected response format (can be a type or Pydantic model)
    -            tools: Optional list of tool names to use
    -
    -
    -        Returns:
    -            The response in the specified format
    -        """
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.send_agent_request_async(agent_configuration, task, llm_model), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.send_agent_request_async(agent_configuration, task, llm_model))
    -
    -    def create_characterization(self, agent_configuration: AgentConfiguration, llm_model: str = None, price_id: str = None):
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.create_characterization_async(agent_configuration, llm_model, price_id), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.create_characterization_async(agent_configuration, llm_model, price_id))
    -
    -    def agent(self, agent_configuration: AgentConfiguration, task: Task,  llm_model: str = None):
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.agent_async(agent_configuration, task, llm_model), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.agent_async(agent_configuration, task, llm_model))
    -
    -    def multiple(self, agent_configuration: AgentConfiguration, task: Task, llm_model: str = None):
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.multiple_async(agent_configuration, task, llm_model), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.multiple_async(agent_configuration, task, llm_model))
    -
    -
    -
    -    async def agent_async(self, agent_configuration: AgentConfiguration, task: Task, llm_model: str = None):
    -        """
    -        Asynchronous version of the agent method.
    -        """
    -        original_task = task
    -        original_task.start_time = time.time()
    -        
    -        if llm_model is None:
    -            llm_model = agent_configuration.model
    -
    -        copy_agent_configuration = copy.deepcopy(agent_configuration)
    -        copy_agent_configuration_json = copy_agent_configuration.model_dump_json(include={"job_title", "company_url", "company_objective", "name", "contact"})
    -        
    -        the_characterization_cache_key = f"characterization_{hashlib.sha256(copy_agent_configuration_json.encode()).hexdigest()}"
    -
    -        if agent_configuration.system_prompt:
    -            the_characterization = agent_configuration.system_prompt
    -        elif llm_model and llm_model.startswith("ollama"):
    -            the_characterization = agent_configuration.system_prompt if agent_configuration.system_prompt else agent_configuration.name
    -        elif agent_configuration.caching:
    -            the_characterization = get_from_cache_with_expiry(the_characterization_cache_key)
    -            if the_characterization is None:
    -                the_characterization = await self.create_characterization_async(agent_configuration, llm_model, task.price_id)
    -                save_to_cache_with_expiry(the_characterization, the_characterization_cache_key, agent_configuration.cache_expiry)
    -        else:
    -            the_characterization = await self.create_characterization_async(agent_configuration, llm_model, task.price_id)
    -
    -        knowledge_base = None
    -        if agent_configuration.knowledge_base:
    -            knowledge_base = agent_configuration.knowledge_base
    -        
    -        the_task = task
    -        is_it_sub_task = False
    -        shared_context = []
    -
    -        if agent_configuration.sub_task:
    -            # Create a new agent configuration for sub-tasks with memory enabled and same retry setting
    -            sub_task_agent_config = copy.deepcopy(agent_configuration)
    -            sub_task_agent_config.agent_id_ = str(uuid.uuid4())  # Generate new agent ID for sub-tasks
    -            sub_task_agent_config.memory = True  # Enable memory for sub-tasks
    -            
    -            # Use the async version of multiple
    -            sub_tasks = await self.multiple_async(sub_task_agent_config, task, llm_model)
    -            is_it_sub_task = True
    -            the_task = sub_tasks
    -
    -        if not isinstance(the_task, list):
    -            the_task = [the_task]
    -
    -        for each in the_task:
    -            if not isinstance(each.context, list):
    -                each.context = [each.context]
    -
    -        last_task = []
    -        for each in the_task:
    -            if isinstance(each.context, list):
    -                last_task.append(each)
    -        the_task = last_task
    -
    -        for each in the_task:
    -            each.context.append(the_characterization)
    -
    -        # Add knowledge base to the context for each task
    -        if knowledge_base:
    -            if isinstance(the_task, list):
    -                for each in the_task:
    -                    if each.context:
    -                        each.context.append(knowledge_base)
    -                    else:
    -                        each.context = [knowledge_base]
    -
    -        if task.context:
    -            for each in the_task:
    -                each.context += task.context
    -
    -        # Create copies of agent_configuration for all tasks except the last one
    -        task_specific_configs = []
    -        for i in range(len(the_task)):
    -            if i < len(the_task) - 1:
    -                # Create a copy and set reliability_layer to None for all except last task
    -                task_config = copy.deepcopy(sub_task_agent_config if agent_configuration.sub_task else agent_configuration)
    -                task_config.reliability_layer = None
    -                task_specific_configs.append(task_config)
    -            else:
    -                # Use original config for the last task
    -                task_specific_configs.append(agent_configuration)
    -
    -        if agent_configuration.tools:
    -            if isinstance(the_task, list):
    -                for each in the_task:
    -                    each.tools = agent_configuration.tools
    -
    -        results = []    
    -        if isinstance(the_task, list):
    -            for i, each in enumerate(the_task):
    -                if is_it_sub_task:
    -                    if shared_context:
    -                        each.context += shared_context
    -
    -                # Use the async version of agent_
    -                result = await self.agent_async_(task_specific_configs[i], each, llm_model=llm_model)
    -                results += result
    -
    -                # Collect tool calls from each subtask
    -                for tool_call in each.tool_calls:
    -                    original_task.add_tool_call(tool_call)
    -
    -                if is_it_sub_task:
    -                    shared_context.append(OtherTask(task=each.description, result=each.response))
    -
    -        original_task._response = the_task[-1].response
    -        
    -        total_time = 0
    -        for each in results:
    -            total_time += each["time"]
    -
    -        total_input_tokens = 0
    -        total_output_tokens = 0
    -        for each in results:
    -            if "usage" in each and each["usage"] is not None:
    -                total_input_tokens += each["usage"].get("input_tokens", 0)
    -                total_output_tokens += each["usage"].get("output_tokens", 0)
    -
    -        the_llm_model = llm_model
    -        if the_llm_model is None:
    -            the_llm_model = self.default_llm_model
    -
    -        agent_total_cost(total_input_tokens, total_output_tokens, total_time, the_llm_model)
    -
    -        if not original_task.not_main_task:
    -            print_price_id_summary(original_task.price_id, original_task)
    -
    -        original_task.end_time = time.time()
    -        return original_task.response
    -
    -    async def agent_async_(
    -        self,
    -        agent_configuration: AgentConfiguration,
    -        task: Task,
    -        llm_model: str = None,
    -    ) -> Any:
    -        """
    -        Asynchronous version of agent_ method.
    -        """
    -        start_time = time.time()
    -        results = []
    -
    -        try:
    -            if isinstance(task, list):
    -                for each in task:
    -                    the_result = await self.send_agent_request_async(agent_configuration, each, llm_model)
    -                    the_result["time"] = time.time() - start_time
    -                    results.append(the_result)
    -                    agent_end(the_result["result"], the_result["llm_model"], the_result["response_format"], 
    -                             start_time, time.time(), the_result["usage"], the_result["tool_usage"], the_result["tool_count"], 
    -                             the_result["context_count"], self.debug, each.price_id)
    -            else:
    -                the_result = await self.send_agent_request_async(agent_configuration, task, llm_model)
    -                the_result["time"] = time.time() - start_time
    -                results.append(the_result)
    -                agent_end(the_result["result"], the_result["llm_model"], the_result["response_format"], 
    -                         start_time, time.time(), the_result["usage"], the_result["tool_usage"], the_result["tool_count"], 
    -                         the_result["context_count"], self.debug, task.price_id)
    -        except Exception as outer_e:
    -            try:
    -                from ...server import stop_dev_server, stop_main_server, is_tools_server_running, is_main_server_running
    -                if is_tools_server_running() or is_main_server_running():
    -                    stop_dev_server()
    -            except Exception:
    -                pass
    -            raise outer_e
    -
    -        end_time = time.time()
    -
    -        return results
    -
    -    async def send_agent_request_async(
    -        self,
    -        agent_configuration: AgentConfiguration,
    -        task: Task,
    -        llm_model: str = None,
    -    ) -> Any:
    -        """
    -        Asynchronous version of send_agent_request method.
    -        """
    -        from ..trace import sentry_sdk
    -        from ..level_utilized.utility import CallErrorException
    -
    -        if llm_model is None:
    -            llm_model = self.default_llm_model
    -
    -        tools = tools_serializer(task.tools)
    -        response_format = task.response_format
    -        
    -        with sentry_sdk.start_transaction(op="task", name="Agent.send_agent_request_async") as transaction:
    -            with sentry_sdk.start_span(op="serialize"):
    -                # Serialize the response format if it's a type or BaseModel
    -                response_format_str = response_format_serializer(task.response_format)
    -
    -            new_context = []
    -            if task.context:
    -                for each in task.context:
    -                    if isinstance(each, KnowledgeBase):
    -                        if not each.rag:
    -                            new_context.append(each.markdown(self))
    -                    else:
    -                        new_context.append(each)
    -
    -                context = context_serializer(new_context, self)
    -            else:
    -                context = None
    -
    -            with sentry_sdk.start_span(op="prepare_request"):
    -                # Prepare the request data
    -                data = {
    -                    "agent_id": agent_configuration.agent_id,
    -                    "prompt": task.description + await task.additional_description(self), 
    -                    "images": task.images_base_64,
    -                    "response_format": response_format_str,
    -                    "tools": tools or [],
    -                    "context": context,
    -                    "llm_model": llm_model,
    -                    "system_prompt": None,
    -                    "context_compress": agent_configuration.context_compress,
    -                    "memory": agent_configuration.memory
    -                }
    -
    -            retry_count = 0
    -            while True:
    -                try:
    -                    with sentry_sdk.start_span(op="send_request"):
    -                        # Send the request asynchronously
    -                        result = await self.send_request_async("/level_two/agent", data)
    -                        result = result["result"]
    -                        
    -                        # Store tool calls in the task if available
    -                        if isinstance(result, dict) and 'tool_usage' in result:
    -                            for tool_call in result['tool_usage']:
    -                                task.add_tool_call(tool_call)
    -                        
    -                        if error_handler(result):  # If it's a retriable error
    -                            if agent_configuration.retry > 0 and retry_count < agent_configuration.retry:  # Check if retries are enabled and we can retry
    -                                retry_count += 1
    -                                from ..printing import agent_retry
    -                                agent_retry(retry_count, agent_configuration.retry)
    -                                continue  # Try again
    -                            else:
    -                                raise CallErrorException(result)  # No more retries, raise the error
    -                        
    -                        break  # If no error or non-retriable error, break the loop
    -
    -                except Exception as e:
    -                    if agent_configuration.retry > 0 and retry_count < agent_configuration.retry:  # Check if retries are enabled and we can retry
    -                        retry_count += 1
    -                        from ..printing import agent_retry
    -                        agent_retry(retry_count, agent_configuration.retry)
    -                        continue  # Try again
    -                    raise e  # No more retries, raise the error
    -
    -            with sentry_sdk.start_span(op="deserialize"):
    -                deserialized_result = response_format_deserializer(response_format_str, result)
    -
    -            # Process result through reliability layer
    -            processed_result = await ReliabilityProcessor.process_result(
    -                deserialized_result["result"], 
    -                agent_configuration.reliability_layer,
    -                task,
    -                llm_model
    -            )
    -            task._response = processed_result
    -
    -            if task.response_lang:
    -                language = Language(task.response_lang, task, llm_model)
    -                processed_result = await language.transform()
    -                task._response = processed_result
    -
    -            response_format_req = None
    -            if response_format_str == "str":
    -                response_format_req = response_format_str
    -            else:
    -                # Class name
    -                response_format_req = response_format.__name__
    -            
    -            if context is None:
    -                context = []
    -
    -            len_of_context = len(task.context) if task.context is not None else 0
    -
    -            return {
    -                "result": processed_result, 
    -                "llm_model": llm_model, 
    -                "response_format": response_format_req, 
    -                "usage": deserialized_result["usage"],
    -                "tool_usage": deserialized_result["tool_usage"],
    -                "tool_count": len(tools), 
    -                "context_count": len_of_context
    -            }
    -
    -    async def create_characterization_async(self, agent_configuration: AgentConfiguration, llm_model: str = None, price_id: str = None):
    -        tools = [Search]
    -
    -        search_task = None
    -        search_result = None
    -        if agent_configuration.company_url:
    -            search_task = Task(description=f"Make a search for {agent_configuration.company_url}", tools=tools, response_format=SearchResult, price_id_=price_id, not_main_task=True)
    -            await Direct.do_async(search_task, llm_model, retry=agent_configuration.retry, client=agent_configuration.client)
    -            search_result = search_task.response
    -
    -        company_objective_task = None
    -        company_objective_result = None
    -        if agent_configuration.company_objective:
    -            context = [search_task] if search_task else None
    -            company_objective_task = Task(description=f"Generate the company objective for {agent_configuration.company_objective}", 
    -                                        tools=tools, 
    -                                        response_format=CompanyObjective,
    -                                        context=context,
    -                                        price_id_=price_id,
    -                                        not_main_task=True)
    -            await Direct.do_async(company_objective_task, llm_model, retry=agent_configuration.retry, client=agent_configuration.client)
    -            company_objective_result = company_objective_task.response
    -
    -        human_objective_result = None
    -        # Handle human objective if job title is provided
    -        if agent_configuration.job_title:
    -            context = []
    -            if search_task:
    -                context.append(search_task)
    -            if company_objective_task:
    -                context.append(company_objective_task)
    -            
    -            context = context if context else None
    -            human_objective_task = Task(description=f"Generate the human objective for {agent_configuration.job_title}", 
    -                                      tools=tools, 
    -                                      response_format=HumanObjective,
    -                                      context=context,
    -                                      price_id_=price_id,
    -                                      not_main_task=True)
    -            await Direct.do_async(human_objective_task, llm_model, retry=agent_configuration.retry, client=agent_configuration.client)
    -            human_objective_result = human_objective_task.response
    -
    -        total_character = Characterization(
    -            website_content=search_result,
    -            company_objective=company_objective_result,
    -            human_objective=human_objective_result,
    -            name_of_the_human_of_tasks=agent_configuration.name,
    -            contact_of_the_human_of_tasks=agent_configuration.contact
    -        )
    -
    -        return total_character
    -
    -    async def call_async(self, task: Task, llm_model: str = None):
    -        """
    -        Asynchronous version of the call method.
    -        """
    -        if llm_model is None:
    -            llm_model = self.default_llm_model
    -
    -
    -        result = await self.send_agent_request_async(AgentConfiguration(), task, llm_model)
    -        task._response = result["result"]
    -        return task.response
    -
    -    async def multiple_async(self, agent_configuration: AgentConfiguration, task: Task, llm_model: str = None):
    -        """
    -        Asynchronous version of the multiple method.
    -        """
    -        if agent_configuration.system_prompt:
    -            system_prompt = "System prompt: " + agent_configuration.system_prompt
    -        else:
    -            system_prompt = None
    -        # First, determine the mode of operation
    -        mode_selection_prompt = f"""
    -You are a Task Analysis AI that helps determine the best mode of task decomposition.
    -
    -Task Agent name: {agent_configuration.job_title}
    -{system_prompt}
    -
    -Given task: "{task.description}"
    -
    -Analyze the task characteristics:
    -
    -Level No Step (Direct Execution) is suitable for:
    -- Tasks that can be completed in a single, atomic operation
    -- Tasks where the output format is simple and well-defined
    -- Tasks that don't require setup or configuration
    -- Tasks where AI can directly generate the complete result
    -- Tasks without dependencies or external integrations
    -Examples:
    -- Simple data transformations
    -- Direct text generation
    -- Single API call operations
    -- Basic calculations or conversions
    -
    -Level One (Basic Decomposition) is suitable for:
    -- Tasks requiring multiple steps or verifications
    -- Tasks with clear, linear steps
    -- Tasks needing external information or resources
    -- Tasks requiring setup or configuration
    -- Tasks involving API integrations or data processing
    -- Tasks that need error handling
    -- Information retrieval and verification tasks
    -Examples of Level One Tasks:
    -- Finding and verifying documentation
    -- Implementation tasks with clear steps
    -- Multi-step data processing
    -- Tasks requiring setup and configuration
    -- Tasks involving API usage
    -- Tasks needing error handling
    -- Tasks that follow a linear sequence of steps
    -
    -Select the mode based on these characteristics.
    -Prefer level_no_step when the task can be completed directly without any decomposition.
    -Use Level One for any task requiring multiple steps or verification.
    -"""
    -        mode_selector = Task(
    -            description=mode_selection_prompt,
    -            images=task.images,
    -            response_format=AgentMode,
    -            context=[task],
    -            price_id_=task.price_id,
    -            not_main_task=True
    -        )
    -        
    -        # Use Direct.do_async with the agent's retry setting
    -        await Direct.do_async(mode_selector, llm_model, retry=agent_configuration.retry, client=agent_configuration.client)
    -        
    -        # If level_no_step is selected, return just the end task
    -        if mode_selector.response.selected_mode == "level_no_step":
    -            return [Task(description=task.description, images=task.images, response_format=task.response_format, response_lang=task.response_lang, tools=task.tools, price_id_=task.price_id, not_main_task=True)]
    -
    -        # Generate a list of sub tasks
    -        prompt = f"""
    -You are a Task Decomposition AI that helps break down large tasks into smaller, manageable subtasks.
    -
    -Task Agent name: {agent_configuration.job_title}
    -{system_prompt}
    -
    -Given task: "{task.description}"
    -Available tools: {task.tools if task.tools else "No tools available"}
    -
    -Tool Dependency Guidelines:
    -- File Operations: Tasks involving file reading, writing, or manipulation require file system tools
    -- Terminal Operations: Tasks requiring command execution need terminal access tools
    -- Web Operations: Tasks involving web searches or API calls need web access tools
    -- System Operations: Tasks involving system configuration or environment setup need system tools
    -
    -Task Decomposition Rules:
    -1. Only create subtasks that can be completed with the available tools
    -2. Skip any operations that would require unavailable tools
    -3. Each subtask must be achievable with the given tool set
    -4. If a critical operation cannot be performed due to missing tools, note it in the task description
    -5. Adapt the approach based on available tools rather than assuming tool availability
    -
    -General Task Rules:
    -1. Each subtask should be clear, specific, and actionable
    -2. Subtasks should be ordered in a logical sequence
    -3. Each subtask should be necessary for completing the main task
    -4. Avoid overly broad or vague subtasks
    -5. Keep subtasks at a similar level of granularity
    -
    -Tool Availability Impact:
    -- Without file system tools: Skip file operations
    -- Without terminal tools: Avoid command execution tasks
    -- Without web tools: Skip online searches, API calls
    -- Without system tools: Avoid system configuration tasks
    -"""
    -        sub_tasker_context = [task, task.response_format]
    -        if task.context:
    -            sub_tasker_context = task.context
    -        sub_tasker = Task(description=prompt, images=task.images, response_format=SubTaskList, context=sub_tasker_context, tools=task.tools, price_id_=task.price_id, not_main_task=True)
    -
    -        # Use Direct.do_async with the agent's retry setting
    -        await Direct.do_async(sub_tasker, llm_model, retry=agent_configuration.retry, client=agent_configuration.client)
    -
    -        sub_tasks = []
    -
    -        # Create tasks from subtasks
    -        for each in sub_tasker.response.sub_tasks:
    -            new_task = Task(description=each.description + " " + each.required_output + " " + str(each.sources_can_be_used) + " " + str(each.tools) + "Focus to complete the task with right result, Dont ask to human directly do it and give the result.", images=task.images, price_id_=task.price_id, not_main_task=True)
    -            new_task.tools = task.tools
    -            sub_tasks.append(new_task)
    -
    -        # Add the final task that will produce the original desired response format
    -        end_task = Task(description=task.description, images=task.images, response_format=task.response_format, response_lang=task.response_lang, price_id_=task.price_id, not_main_task=True)
    -        sub_tasks.append(end_task)
    -
    -        return sub_tasks
    -
    -    def call(self, task: Task, llm_model: str = None):
    -        """
    -        Synchronous version of the call method that uses the async version internally.
    -        """
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.call_async(task, llm_model), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.call_async(task, llm_model))
    -
    -
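The removed `call` wrapper above bridges synchronous callers to `call_async`. Note that blocking on `run_coroutine_threadsafe(...).result()` from the event loop's own thread, as the removed code does, would deadlock: the loop thread waits on a future that only the loop itself could complete. A common variant instead runs the coroutine on a fresh loop in a helper thread. A minimal stdlib sketch of that pattern (`run_sync` and `compute` are illustrative names, not Upsonic API):

```python
import asyncio
import threading


async def compute() -> int:
    # Stand-in for call_async(...)
    await asyncio.sleep(0)
    return 42


def run_sync(coro):
    """Run a coroutine from synchronous code.

    If no loop is running in this thread, asyncio.run suffices. If one
    is running, blocking on run_coroutine_threadsafe(...).result() from
    the loop's own thread would deadlock, so the coroutine is executed
    on a fresh loop in a helper thread instead.
    """
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro)

    result = {}

    def worker():
        result["value"] = asyncio.run(coro)

    t = threading.Thread(target=worker)
    t.start()
    t.join()
    return result["value"]


print(run_sync(compute()))  # 42
```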
    
  • src/upsonic/client/level_utilized/utility.py (+0 / -142 lines removed)
    @@ -1,142 +0,0 @@
    -import copy
    -from datetime import datetime
    -import dill
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import base64
    -
    -from pydantic import BaseModel
    -from ..knowledge_base.knowledge_base import KnowledgeBase
    -from ..printing import error_message
    -from ...exception import (
    -    NoAPIKeyException,
    -    ContextWindowTooSmallException,
    -    InvalidRequestException,
    -    UnsupportedLLMModelException,
    -    UnsupportedComputerUseModelException,
    -    CallErrorException
    -)
    -
    -
    -def serialize_context(context, client):
    -    if isinstance(context, KnowledgeBase):
    -        context = context.markdown(client)
    -    
    -    return context
    -
    -def context_serializer(context, client):
    -
    -    if context is None:
    -        context = []
    -
    -
    -    copy_of_context = copy.deepcopy(context)
    -
    -    if not isinstance(copy_of_context, list):
    -        copy_of_context = [copy_of_context]
    -    
    -
    -    # Adding current date time to the context
    -    copy_of_context.append(f"Current date and time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
    -
    -    for i, each in enumerate(copy_of_context):
    -            try:
    -                each.tools = []
    -            except:
    -                pass
    -            try:
    -                each.response_format = None
    -            except:
    -                pass
    -
    -            copy_of_context[i] = serialize_context(each, client)
    -
    -
    -    the_module = dill.detect.getmodule(copy_of_context)
    -    if the_module is not None:
    -        cloudpickle.register_pickle_by_value(the_module)
    -
    -
    -    pickled_context = cloudpickle.dumps(copy_of_context)
    -    context = base64.b64encode(pickled_context).decode("utf-8")
    -
    -
    -    return context
    -
    -
    -
    -def response_format_serializer(response_format):
    -    if response_format is None:
    -        response_format_str = "str"
    -    elif isinstance(response_format, (type, BaseModel)):
    -        # If it's a Pydantic model or other type, cloudpickle and base64 encode it
    -        the_module = dill.detect.getmodule(response_format)
    -        if the_module is not None:
    -            cloudpickle.register_pickle_by_value(the_module)
    -        pickled_format = cloudpickle.dumps(response_format)
    -        response_format_str = base64.b64encode(pickled_format).decode("utf-8")
    -    else:
    -        response_format_str = "str"
    -
    -    return response_format_str
    -
    -
    -def response_format_deserializer(response_format_str, result):
    -    if response_format_str != "str":
    -        decoded_result = base64.b64decode(result["result"])
    -        deserialized_result = cloudpickle.loads(decoded_result)
    -    else:
    -        deserialized_result = result["result"]
    -
    -    result["result"] = deserialized_result
    -
    -    return result
    -
    -
    -def tools_serializer(tools_):
    -    tools = []
    -    for i in tools_:
    -
    -
    -        if isinstance(i, type):
    -
    -            tools.append(i.__name__+".*")
    -        # If its a string, get the name of the string
    -        elif isinstance(i, str):
    -
    -            tools.append(i)
    -        elif isinstance(i, object):
    -            sub_i = i.__class__
    -            tools.append(sub_i.__name__+".*")
    -    return tools
    -
    -
    -
    -def error_handler(result):
    -    if result["status_code"] == 401:
    -        error_message("API Key Error", result["detail"], 401)
    -        raise NoAPIKeyException(result["detail"])
    -    
    -    if result["status_code"] == 402:
    -        error_message("Context Window Error", result["detail"], 402)
    -        raise ContextWindowTooSmallException(result["detail"])
    -
    -    if result["status_code"] == 403:
    -        error_message("Invalid Request", result["detail"], 403)
    -        raise InvalidRequestException(result["detail"])
    -
    -    if result["status_code"] == 400:
    -        error_message("Unsupported Model", result["detail"], 400)
    -        raise UnsupportedLLMModelException(result["detail"])
    -
    -    if result["status_code"] == 405:
    -        error_message("Unsupported Computer Use Model", result["detail"], 405)
    -        raise UnsupportedComputerUseModelException(result["detail"])
    -
    -    if result["status_code"] == 500:
    -        # Extract meaningful message from the error if available
    -        error_detail = result.get("message", str(result))
    -        error_message("Call Error", error_detail, 500)
    -        return True  # Indicate this is a retriable error
    -        
    -    return False  # Not a retriable error
    \ No newline at end of file
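`context_serializer` and `response_format_serializer` above share one shape: pickle the object, then base64-encode the bytes so the payload survives JSON transport. A minimal round-trip sketch, using stdlib `pickle` in place of cloudpickle/dill (`serialize`/`deserialize` are illustrative names; the caveat in the comment is the general pickle rule, not specific to Upsonic):

```python
import base64
import pickle  # the removed code uses cloudpickle; stdlib pickle stands in here


def serialize(obj) -> str:
    # Pickle, then base64-encode so the payload is plain ASCII text.
    return base64.b64encode(pickle.dumps(obj)).decode("utf-8")


def deserialize(payload: str):
    # WARNING: unpickling executes arbitrary code during object
    # reconstruction; only ever feed this data from a trusted peer.
    return pickle.loads(base64.b64decode(payload))


roundtrip = deserialize(serialize({"input_tokens": 10, "output_tokens": 3}))
print(roundtrip)  # {'input_tokens': 10, 'output_tokens': 3}
```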
    
  • src/upsonic/client/markdown/markdown.py (+0 / -46 lines removed)
    @@ -1,46 +0,0 @@
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import dill
    -import base64
    -import httpx
    -from typing import Any, List, Dict, Optional, Type, Union
    -from pydantic import BaseModel
    -import os
    -import tempfile
    -
    -
    -class Markdown:
    -    def markdown(self, file_path: str) -> str:
    -        """
    -        Upload a file and convert it to markdown.
    -
    -        Args:
    -            file_path: Path to the file to convert or a URL to download and convert
    -
    -        Returns:
    -            The markdown content
    -        """
    -        if file_path.startswith("http"):
    -            # Download file
    -            response = httpx.get(file_path)
    -            response.raise_for_status()
    -
    -            # Save to temporary .html file
    -            fd, tmp_path = tempfile.mkstemp(suffix=".html")
    -            os.write(fd, response.content)
    -            os.close(fd)
    -
    -            file_path = tmp_path
    -
    -        if not os.path.exists(file_path):
    -            raise FileNotFoundError(f"File not found: {file_path}")
    -
    -        # Read the file and prepare for upload
    -        with open(file_path, "rb") as f:
    -            files = {"file": (os.path.basename(file_path), f)}
    -            response = self.send_request("/markdown/upload", {}, files=files)
    -            
    -        if file_path.startswith("/tmp"):
    -            os.remove(file_path)  # Delete temporary HTML file
    -
    -        return response.get("markdown")
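The client-side `markdown` method above posts the file to `/markdown/upload`; the traversal the advisory describes lies on the server side of that endpoint, where `os.path.join` is fed `file.filename`. A minimal sketch of the flaw and one common mitigation, assuming POSIX paths (all names here, including `UPLOAD_DIR` and `safe_join`, are illustrative, not Upsonic's actual code):

```python
import os

UPLOAD_DIR = "/srv/uploads"  # hypothetical upload root

# os.path.join happily accepts ".." segments in an attacker-controlled
# filename, which is the traversal pattern CVE-2025-6278 describes:
evil = os.path.normpath(os.path.join(UPLOAD_DIR, "../../etc/passwd"))
print(evil)  # /etc/passwd -- escaped the upload root


def safe_join(root: str, filename: str) -> str:
    # Drop any directory components from the client-supplied name,
    # then double-check the result still resolves inside root.
    candidate = os.path.normpath(os.path.join(root, os.path.basename(filename)))
    if os.path.commonpath([root, candidate]) != root:
        raise ValueError("path traversal attempt")
    return candidate


print(safe_join(UPLOAD_DIR, "../../etc/passwd"))  # /srv/uploads/passwd
```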
    
  • src/upsonic/client/others/others.py (+0 / -47 lines removed)
    @@ -1,47 +0,0 @@
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import dill
    -import base64
    -import httpx
    -from typing import Any, List, Dict, Optional, Type, Union
    -from pydantic import BaseModel
    -import os
    -import tempfile
    -
    -from io import BytesIO
    -
    -
    -class Others:
    -    def screenshot(self, show: bool = True, save_path: Optional[str] = None) -> Optional[bytes]:
    -        import matplotlib.pyplot as plt
    -        import matplotlib.image as mpimg
    -        """
    -        Take a screenshot using the server and optionally display it or save it.
    -
    -        Args:
    -            show: Whether to display the screenshot using matplotlib
    -            save_path: Optional path to save the screenshot
    -
    -        Returns:
    -            The screenshot bytes if save_path is not provided
    -        """
    -        # Get the screenshot from the server
    -        response = self.send_request("/others/take_screenshot", {}, method="GET", return_raw=True)
    -        
    -        if save_path:
    -            # Save the screenshot to the specified path
    -            with open(save_path, 'wb') as f:
    -                f.write(response)
    -        
    -        if show:
    -            # Display the screenshot using matplotlib
    -            img = mpimg.imread(BytesIO(response))
    -            plt.figure(figsize=(15, 10))
    -            plt.axis('off')
    -            plt.imshow(img)
    -            plt.show()
    -        
    -        if not save_path:
    -            return response
    -        
    -        return None
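The screenshot helper above either writes the response bytes to `save_path` or returns them to the caller. That save-or-return contract can be sketched with the stdlib alone (`deliver` is an illustrative name, not Upsonic API):

```python
import os
import tempfile


def deliver(data: bytes, save_path=None):
    """Write the bytes to save_path when one is given and return None;
    otherwise return the bytes to the caller."""
    if save_path:
        with open(save_path, "wb") as f:
            f.write(data)
        return None
    return data


# No path: the raw bytes come back.
print(deliver(b"png-bytes"))  # b'png-bytes'

# With a path: the bytes land on disk and the function returns None.
tmp = os.path.join(tempfile.gettempdir(), "shot_example.bin")
deliver(b"png-bytes", tmp)
with open(tmp, "rb") as f:
    print(f.read() == b"png-bytes")  # True
os.remove(tmp)
```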
    
  • src/upsonic/client/price.py (+0 / -4 lines removed)
    @@ -1,4 +0,0 @@
    -from ..model_registry import get_estimated_cost
    -
    -# The pricing data and all related functionality has been moved to model_registry.py
    -# This file provides backward compatibility for existing imports 
    \ No newline at end of file
    
  • src/upsonic/client/storage/storage.py (+0 / -210 lines removed)
    @@ -1,210 +0,0 @@
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import dill
    -import base64
    -import httpx
    -import os
    -import asyncio
    -from typing import Any, List, Dict, Optional, Type, Union
    -from pydantic import BaseModel, Field
    -
    -
    -from dotenv import load_dotenv
    -load_dotenv(os.path.join(os.getcwd(), ".env"))
    -
    -
    -class ClientConfig(BaseModel):
    -    DEFAULT_LLM_MODEL: str = Field(default="openai/gpt-4o")
    -    
    -    OPENAI_API_KEY: str | None = Field(default_factory=lambda: os.getenv("OPENAI_API_KEY"))
    -
    -    ANTHROPIC_API_KEY: str | None = Field(default_factory=lambda: os.getenv("ANTHROPIC_API_KEY"))
    -    
    -    AZURE_OPENAI_ENDPOINT: str | None = Field(default_factory=lambda: os.getenv("AZURE_OPENAI_ENDPOINT"))
    -    AZURE_OPENAI_API_VERSION: str | None = Field(default_factory=lambda: os.getenv("AZURE_OPENAI_API_VERSION"))
    -    AZURE_OPENAI_API_KEY: str | None = Field(default_factory=lambda: os.getenv("AZURE_OPENAI_API_KEY"))
    -    
    -    AWS_ACCESS_KEY_ID: str | None = Field(default_factory=lambda: os.getenv("AWS_ACCESS_KEY_ID"))
    -    AWS_SECRET_ACCESS_KEY: str | None = Field(default_factory=lambda: os.getenv("AWS_SECRET_ACCESS_KEY"))
    -    AWS_REGION: str | None = Field(default_factory=lambda: os.getenv("AWS_REGION"))
    -
    -    DEEPSEEK_API_KEY: str | None = Field(default_factory=lambda: os.getenv("DEEPSEEK_API_KEY"))
    -
    -    GOOGLE_GLA_API_KEY: str | None = Field(default_factory=lambda: os.getenv("GOOGLE_GLA_API_KEY"))
    -    
    -    OPENROUTER_API_KEY: str | None = Field(default_factory=lambda: os.getenv("OPENROUTER_API_KEY"))
    -
    -
    -
    -class Storage:
    -
    -
    -
    -    def get_config(self, key: str) -> Any:
    -        """
    -        Get a configuration value by key from the server.
    -
    -        Args:
    -            key: The configuration key
    -
    -        Returns:
    -            The configuration value
    -        """
    -        try:
    -            # Try to get the current event loop
    -            try:
    -                loop = asyncio.get_running_loop()
    -                if loop.is_running():
    -                    # We're in an async context, but this is a sync method
    -                    # We need to run the async method in a new thread
    -                    from ..base import run_coroutine_in_new_thread
    -                    return run_coroutine_in_new_thread(self.get_config_async(key))
    -            except RuntimeError:
    -                # No event loop is running, use asyncio.run
    -                return asyncio.run(self.get_config_async(key))
    -        except Exception as e:
    -            raise e
    -
    -    async def get_config_async(self, key: str) -> Any:
    -        """
    -        Get a configuration value by key from the server asynchronously.
    -
    -        Args:
    -            key: The configuration key
    -
    -        Returns:
    -            The configuration value
    -        """
    -        from ..trace import sentry_sdk
    -        with sentry_sdk.start_transaction(op="task", name="Storage.get_config_async") as transaction:
    -            with sentry_sdk.start_span(op="send_request_async"):
    -                data = {"key": key}
    -                response = await self.send_request_async("/storage/config/get", data=data)
    -            return response.get("value")
    -
    -    def set_config(self, key: str, value: str) -> str:
    -        """
    -        Set a configuration value on the server.
    -
    -        Args:
    -            key: The configuration key
    -            value: The configuration value
    -
    -        Returns:
    -            A success message
    -        """
    -        try:
    -            # Try to get the current event loop
    -            try:
    -                loop = asyncio.get_running_loop()
    -                if loop.is_running():
    -                    # We're in an async context, but this is a sync method
    -                    # We need to run the async method in a new thread
    -                    from ..base import run_coroutine_in_new_thread
    -                    return run_coroutine_in_new_thread(self.set_config_async(key, value))
    -            except RuntimeError:
    -                # No event loop is running, use asyncio.run
    -                return asyncio.run(self.set_config_async(key, value))
    -        except Exception as e:
    -            raise e
    -
    -    async def set_config_async(self, key: str, value: str) -> str:
    -        """
    -        Set a configuration value on the server asynchronously.
    -
    -        Args:
    -            key: The configuration key
    -            value: The configuration value
    -
    -        Returns:
    -            A success message
    -        """
    -        from ..trace import sentry_sdk
    -        with sentry_sdk.start_transaction(op="task", name="Storage.set_config_async") as transaction:
    -            with sentry_sdk.start_span(op="send_request_async"):
    -                data = {"key": key, "value": value}
    -                response = await self.send_request_async("/storage/config/set", data=data)
    -            return response.get("message")
    -
    -    def bulk_set_config(self, configs: Dict[str, str]) -> str:
    -        """
    -        Set multiple configuration values on the server at once.
    -
    -        Args:
    -            configs: Dictionary of configuration key-value pairs
    -
    -        Returns:
    -            A success message
    -        """
    -        try:
    -            # Try to get the current event loop
    -            try:
    -                loop = asyncio.get_running_loop()
    -                if loop.is_running():
    -                    # We're in an async context, but this is a sync method
    -                    # We need to run the async method in a new thread
    -                    from ..base import run_coroutine_in_new_thread
    -                    return run_coroutine_in_new_thread(self.bulk_set_config_async(configs))
    -            except RuntimeError:
    -                # No event loop is running, use asyncio.run
    -                return asyncio.run(self.bulk_set_config_async(configs))
    -        except Exception as e:
    -            raise e
    -
    -    async def bulk_set_config_async(self, configs: Dict[str, str]) -> str:
    -        """
    -        Set multiple configuration values on the server at once asynchronously.
    -
    -        Args:
    -            configs: Dictionary of configuration key-value pairs
    -
    -        Returns:
    -            A success message
    -        """
    -        data = {"configs": configs}
    -        response = await self.send_request_async("/storage/config/bulk_set", data=data)
    -        return response.get("message")
    -
    -    def set_default_llm_model(self, llm_model: str):
    -        self.default_llm_model = llm_model
    -
    -    def config(self, config: ClientConfig):
    -        """
    -        Configure the client.
    -        
    -        Args:
    -            config: ClientConfig object with configuration values
    -        """
    -        try:
    -            # Try to get the current event loop
    -            try:
    -                loop = asyncio.get_running_loop()
    -                if loop.is_running():
    -                    # We're in an async context, but this is a sync method
    -                    # We need to run the async method in a new thread
    -                    from ..base import run_coroutine_in_new_thread
    -                    return run_coroutine_in_new_thread(self.config_async(config))
    -            except RuntimeError:
    -                # No event loop is running, use asyncio.run
    -                return asyncio.run(self.config_async(config))
    -        except Exception as e:
    -            raise e
    -
    -    async def config_async(self, config: ClientConfig):
    -        """
    -        Configure the client asynchronously.
    -        
    -        Args:
    -            config: ClientConfig object with configuration values
    -        """
    -        # Create a dictionary of non-None values excluding default_llm_model
    -        config_dict = {
    -            key: str(value) for key, value in config.model_dump().items() 
    -            if key != "DEFAULT_LLM_MODEL" and value is not None
    -        }
    -        
    -        # Bulk set the configurations if there are any
    -        if config_dict:
    -            await self.bulk_set_config_async(config_dict)
    -        
    -        self.default_llm_model = config.DEFAULT_LLM_MODEL
    
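The `Storage` methods removed above all repeat the same sync-over-async bridge: detect whether an event loop is already running in this thread, then either `asyncio.run` the coroutine directly or push it onto a fresh loop in a new thread. A minimal, self-contained sketch of that pattern follows; `run_coroutine_in_new_thread` here is a hypothetical stand-in for the helper the removed code imports from `..base`, and the stub coroutine replaces the real HTTP round trip. (Note that `asyncio.get_running_loop()` only ever returns a *running* loop, so the extra `loop.is_running()` check in the original is redundant.)

```python
import asyncio
import threading


def run_coroutine_in_new_thread(coro):
    # Hypothetical stand-in for the ..base helper: run the coroutine to
    # completion on a fresh event loop in its own thread, so a sync
    # method can block even while another loop is already running.
    result = {}

    def runner():
        result["value"] = asyncio.run(coro)

    worker = threading.Thread(target=runner)
    worker.start()
    worker.join()
    return result["value"]


class Storage:
    async def get_config_async(self, key):
        await asyncio.sleep(0)  # stand-in for the HTTP round trip
        return {"DEFAULT_LLM_MODEL": "openai/gpt-4o"}.get(key)

    def get_config(self, key):
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            # No loop running in this thread: asyncio.run is safe here.
            return asyncio.run(self.get_config_async(key))
        # A loop is already running: block on a fresh loop in another thread.
        return run_coroutine_in_new_thread(self.get_config_async(key))
```

The same shape works from both plain synchronous code and from inside an async handler, which is why the removed client repeats it for every config method.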
  • src/upsonic/client/team/team.py+0 182 removed
    @@ -1,182 +0,0 @@
    -from ..agent_configuration.agent_configuration import get_or_create_client, register_tools
    -from ..tasks.tasks import Task
    -from ..agent_configuration.agent_configuration import AgentConfiguration
    -from ..tasks.task_response import ObjectResponse
    -from ..direct_llm_call.direct_llm_cal import Direct
    -from typing import Any, List, Dict, Optional, Type, Union, Literal
    -from ...model_registry import ModelNames
    -
    -from ..agent_configuration.agent_configuration import AgentConfiguration as Agent
    -
    -class Team:
    -    """A callable class for multi-agent operations using the Upsonic client."""
    -    
    -    def __init__(self, agents: list[Any], tasks: list[Task] | None = None, llm_model: str | None = None, response_format: Any = None, model: ModelNames | None = None):
    -        """
    -        Initialize the Team with agents and optionally tasks.
    -        
    -        Args:
    -            agents: List of agent configurations to use
    -            tasks: List of tasks to execute (optional)
    -            llm_model: The LLM model to use (optional)
    -            response_format: The response format for the end task (optional)
    -        """
    -        self.agents = agents
    -        self.tasks = tasks if isinstance(tasks, list) else [tasks] if tasks is not None else []
    -        self.llm_model = llm_model
    -        self.response_format = response_format
    -        self.model = model
    -    def do(self, tasks: list[Task] | Task | None = None):
    -        """
    -        Execute multi-agent operations with the predefined agents and tasks.
    -        
    -        Args:
    -            tasks: Optional list of tasks or single task to execute. If not provided, uses tasks from initialization.
    -        
    -        Returns:
    -            The response from the multi-agent operation
    -        """
    -        global latest_upsonic_client
    -        from ..latest_upsonic_client import latest_upsonic_client
    -
    -        # Get or create client for agents without custom clients
    -        the_client = get_or_create_client()
    -        
    -        # Use provided tasks or fall back to initialized tasks
    -        tasks_to_execute = tasks if tasks is not None else self.tasks
    -        if not isinstance(tasks_to_execute, list):
    -            tasks_to_execute = [tasks_to_execute]
    -        
    -        # Register tools for all tasks regardless of client
    -        for task in tasks_to_execute:
    -            the_client = register_tools(the_client, task.tools)
    -            # Also register tools for agents with custom clients
    -            for agent in self.agents:
    -                if agent.client is not None:
    -                    agent.client = register_tools(agent.client, task.tools)
    -        
    -        # Update the global client reference
    -        if latest_upsonic_client is None:
    -            latest_upsonic_client = the_client
    -
    -        # Execute the multi-agent call
    -        return self.multi_agent(the_client, self.agents, tasks_to_execute, self.llm_model)
    -    
    -    def multi_agent(self, the_client: Any, agent_configurations: List[AgentConfiguration], tasks: Any, llm_model: str = None):
    -        import asyncio
    -        
    -        try:
    -            # Check if there's a running event loop
    -            loop = asyncio.get_running_loop()
    -            if loop.is_running():
    -                # If there's a running loop, run the coroutine in that loop
    -                return asyncio.run_coroutine_threadsafe(
    -                    self.multi_agent_async(the_client, agent_configurations, tasks, llm_model), 
    -                    loop
    -                ).result()
    -        except RuntimeError:
    -            # No running event loop
    -            pass
    -        
    -        # If no running loop or exception occurred, create a new one
    -        return asyncio.run(self.multi_agent_async(the_client, agent_configurations, tasks, llm_model))
    -
    -    async def multi_agent_async(self, the_client: Any, agent_configurations: List[AgentConfiguration], tasks: Any, llm_model: str = None):
    -        """
    -        Asynchronous version of the multi_agent method.
    -        """
    -        agent_tasks = []
    -        all_results = []
    -
    -        the_agents = {}
    -
    -        for each in agent_configurations:
    -            agent_key = each.agent_id[:5] + "_" + each.job_title
    -            the_agents[agent_key] = each
    -
    -        the_agents_keys = list(the_agents.keys())
    -
    -        class TheAgents_(ObjectResponse):
    -            agents: List[str]
    -
    -        the_agents_ = TheAgents_(agents=the_agents_keys)
    -
    -        class SelectedAgent(ObjectResponse):
    -            selected_agent: str
    -
    -        if isinstance(tasks, list) != True:
    -            tasks = [tasks]
    -        
    -        for each in tasks:
    -            is_end = False
    -            selected_agent = None
    -            while not is_end:
    -                selecting_task = Task(description="Select an agent for this task", images=each.images, response_format=SelectedAgent, context=[the_agents_, each])
    -                the_call_llm_model = agent_configurations[0].model
    -                await Direct.do_async(selecting_task, the_call_llm_model, retry=agent_configurations[0].retry)
    -                if selecting_task.response.selected_agent in the_agents:
    -                    is_end = True
    -                    selected_agent = selecting_task.response.selected_agent
    -            
    -            if selected_agent:
    -                agent_tasks.append({
    -                    "agent": the_agents[selected_agent],
    -                    "task": each
    -                })
    -                    
    -        # Store original client
    -        original_client = the_client
    -
    -        # Process tasks asynchronously
    -        for each in agent_tasks:
    -            # Check if agent has a custom client
    -            if each["agent"].client is not None:
    -                # Use agent's custom client for this task with async method
    -                result = await each["agent"].client.agent_async(each["agent"], each["task"], llm_model)
    -                all_results.append({
    -                    "task": each["task"].description,
    -                    "result": result
    -                })
    -            else:
    -                # Use the default/automatic client with async method
    -                result = await original_client.agent_async(each["agent"], each["task"], llm_model)
    -                all_results.append({
    -                    "task": each["task"].description,
    -                    "result": result
    -                })
    -
    -        # If there's only one task, return its result directly
    -        if len(all_results) == 1:
    -            return all_results[0]["result"]
    -
    -        # Create an end task that combines all results
    -        class OtherTask(ObjectResponse):
    -            task: str
    -            result: Any
    -
    -        # Create OtherTask objects for the context
    -        other_tasks = [
    -            OtherTask(task=result["task"], result=result["result"])
    -            for result in all_results
    -        ]
    -
    -        end_task = Task(
    -            description="Combine the results of all previous tasks in your context into a final answer for the user. Do not talk about yourself or the tasks directly; synthesize everything from the previous steps into one final response. Answer the user's questions: if there is a single question, return its answer; if there are multiple questions, return all of the answers along with a summary.",
    -            context=other_tasks,
    -            response_format=self.response_format
    -        )
    -
    -        end_agent = Direct(model=self.model, client=self.agents[-1].client, debug=self.agents[-1].debug)
    -        final_response = await end_agent.do_async(end_task)
    -        return final_response
    -
    -    def print_do(self):
    -        """
    -        Execute the multi-agent operation and print the result.
    -        
    -        Returns:
    -            The response from the multi-agent operation
    -        """
    -        result = self.do()
    -        print(result)
    -        return result
    
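The routing logic in the removed `multi_agent_async` builds a short key per agent (`agent_id[:5] + "_" + job_title`), asks an LLM to pick one, and retries until the answer matches a known key — with no retry cap, so a selector that never names a real agent would loop forever. A stripped-down sketch of that bookkeeping, with a stubbed selector in place of the LLM call (`AgentConfig` and `pick` are simplified stand-ins, not the real Upsonic types):

```python
class AgentConfig:
    def __init__(self, agent_id, job_title):
        self.agent_id = agent_id
        self.job_title = job_title


def agent_key(agent):
    # Same scheme as the removed code: first five characters of the
    # agent id, an underscore, then the job title.
    return agent.agent_id[:5] + "_" + agent.job_title


def select_agent(agents_by_key, pick):
    # Loop until the selector names a key that actually exists,
    # mirroring the while-not-is_end loop above.
    while True:
        choice = pick(list(agents_by_key))
        if choice in agents_by_key:
            return agents_by_key[choice]


agents = [AgentConfig("abcde12345", "Researcher"),
          AgentConfig("fghij67890", "Writer")]
by_key = {agent_key(a): a for a in agents}

# A selector that hallucinates an unknown key once before answering
# correctly -- the retry loop absorbs the bad answer.
answers = iter(["NotAnAgent", "fghij_Writer"])
chosen = select_agent(by_key, lambda keys: next(answers))
```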
  • src/upsonic/client/tools/__init__.py+0 6 removed
    @@ -1,6 +0,0 @@
    -from .tools import ComputerUse, Search, BrowserUse
    -
    -
    -
    -
    -__all__ = ["ComputerUse", "Search", "BrowserUse"]
    \ No newline at end of file
    
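For context on the advisory itself: the reported flaw is an unchecked `os.path.join(..., file.filename)` in `markdown/server.py`, a file not shown in this excerpt. A generic sketch of why that pattern is exploitable and one common hardening — this is an illustration with a hypothetical `UPLOAD_DIR`, not the actual patched code:

```python
import os

UPLOAD_DIR = "/srv/uploads"  # hypothetical upload root


def unsafe_path(filename):
    # The vulnerable pattern: "../" segments in filename walk out of
    # UPLOAD_DIR, and an absolute filename discards UPLOAD_DIR entirely.
    return os.path.join(UPLOAD_DIR, filename)


def safe_path(filename):
    # Keep only the final path component, then double-check that the
    # normalized result still lives under UPLOAD_DIR.
    candidate = os.path.normpath(
        os.path.join(UPLOAD_DIR, os.path.basename(filename)))
    if os.path.commonpath([candidate, UPLOAD_DIR]) != UPLOAD_DIR:
        raise ValueError("path escapes upload directory")
    return candidate
```

With `filename = "../../etc/passwd"`, the unsafe variant resolves outside the upload root while the hardened one collapses the input to its basename.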
  • src/upsonic/client/tools/tools.py+0 239 removed
    @@ -1,239 +0,0 @@
    -import inspect
    -import cloudpickle
    -
    -from ..level_utilized.utility import error_handler
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import dill
    -import base64
    -import httpx
    -from typing import Any, List, Dict, Optional, Type, Union, Callable
    -from pydantic import BaseModel
    -from functools import wraps
    -
    -import inspect
    -import functools
    -
    -from ..tasks.tasks import Task
    -from ...exception import NoAPIKeyException, UnsupportedLLMModelException
    -from ..printing import mcp_tool_operation
    -
    -class ComputerUse:
    -    pass
    -
    -class BrowserUse:
    -    pass
    -
    -class Search:
    -    pass
    -
    -
    -def generate_static_method_class(instance):
    -    # Store instance attributes
    -    instance_attrs = {name: value for name, value in inspect.getmembers(instance)
    -                     if not name.startswith('__') and not callable(value)}
    -    
    -    # Create new class with the same name as the original class
    -    original_class_name = type(instance).__name__
    -    TransformedClass = type(original_class_name, (), {})
    -    
    -    # Set instance attributes as class attributes
    -    for attr_name, attr_value in instance_attrs.items():
    -        setattr(TransformedClass, attr_name, attr_value)
    -
    -    # Dynamically add each method as static method to the new class
    -    for method_name, method in inspect.getmembers(instance, predicate=inspect.ismethod):
    -        if not method_name.startswith('__'):
    -            # Create a closure that captures the instance attributes
    -            def create_static_method(method, instance_attrs):
    -                if inspect.iscoroutinefunction(method):
    -                    @functools.wraps(method)
    -                    async def static_wrapper(*args, **kwargs):
    -                        # Create a new instance with the stored attributes
    -                        temp_instance = type(instance)(**{k: v for k, v in instance_attrs.items()})
    -                        return await method.__get__(temp_instance, type(instance))(*args, **kwargs)
    -                    return static_wrapper
    -                else:
    -                    @functools.wraps(method)
    -                    def static_wrapper(*args, **kwargs):
    -                        # Create a new instance with the stored attributes
    -                        temp_instance = type(instance)(**{k: v for k, v in instance_attrs.items()})
    -                        return method.__get__(temp_instance, type(instance))(*args, **kwargs)
    -                    return static_wrapper
    -
    -            static_method = staticmethod(create_static_method(method, instance_attrs))
    -            setattr(TransformedClass, method_name, static_method)
    -
    -    return TransformedClass
    -
    -class Tools:
    -    def tool(self, library: Optional[Union[str, List[str]]] = None):
    -        """
    -        Decorator to register a function or class as a tool.
    -        Can be used as @tool(), @tool("pandas"), or @tool(["pandas", "numpy"])
    -
    -        Args:
    -            library: Optional library name or list of library names to install before registering the tool
    -        """
    -        def decorator(obj: Union[Callable, Type]):
    -            # Install libraries first if specified
    -            if library:
    -                if isinstance(library, str):
    -                    self.install_library(library)
    -                else:
    -                    for lib in library:
    -                        self.install_library(lib)
    -            
    -            # If it's a class, register each method as a tool
    -            if isinstance(obj, type):
    -                class_name = obj.__name__
    -                
    -                # Get all methods that don't start with underscore
    -                methods = [(name, getattr(obj, name)) for name in dir(obj) 
    -                          if not name.startswith('_') and callable(getattr(obj, name))]
    -                
    -                # Register each method as a tool
    -                for name, method in methods:
    -                    # Convert the method to a standalone function
    -                    def create_standalone(method, full_name):
    -                        if inspect.iscoroutinefunction(method):
    -                            @wraps(method)
    -                            async def standalone(*args, **kwargs):
    -                                return await method(*args, **kwargs)
    -                        else:
    -                            @wraps(method)
    -                            def standalone(*args, **kwargs):
    -                                return method(*args, **kwargs)
    -                        standalone.__name__ = full_name
    -                        return standalone
    -                    
    -                    full_name = f"{class_name}__{name}"
    -                    standalone = create_standalone(method, full_name)
    -                    self.add_tool(standalone)
    -                
    -                return obj
    -            
    -            if isinstance(obj, object):
    -                obj = generate_static_method_class(obj)
    -
    -                # Get all methods that don't start with underscore
    -                methods = [(name, getattr(obj, name)) for name in dir(obj) 
    -                          if not name.startswith('_') and callable(getattr(obj, name))]
    -                
    -                # Register each method as a tool
    -                for name, method in methods:
    -                    # Convert the method to a standalone function
    -                    def create_standalone(method, full_name):
    -                        if inspect.iscoroutinefunction(method):
    -                            @wraps(method)
    -                            async def standalone(*args, **kwargs):
    -                                return await method(*args, **kwargs)
    -                        else:
    -                            @wraps(method)
    -                            def standalone(*args, **kwargs):
    -                                return method(*args, **kwargs)
    -                        standalone.__name__ = full_name
    -                        return standalone
    -                    
    -                    full_name = f"{obj.__name__}__{name}"
    -                    standalone = create_standalone(method, full_name)
    -                    self.add_tool(standalone)
    -                
    -            else:
    -                # Register the function as a tool
    -                if inspect.iscoroutinefunction(obj):
    -                    @wraps(obj)
    -                    async def wrapper(*args, **kwargs):
    -                        return await obj(*args, **kwargs)
    -                else:
    -                    @wraps(obj)
    -                    def wrapper(*args, **kwargs):
    -                        return obj(*args, **kwargs)
    -                
    -                self.add_tool(wrapper)
    -                return wrapper
    -                
    -        return decorator
    -
    -    def add_tool(
    -        self,
    -        function,
    -    ) -> Any:
    -        # Get the function then make a cloudpickle of it
    -        the_module = dill.detect.getmodule(function)
    -        if the_module is not None:
    -            cloudpickle.register_pickle_by_value(the_module)
    -
    -        the_dumped_function = cloudpickle.dumps(function)
    -
    -        data = {
    -            "function": base64.b64encode(the_dumped_function).decode("utf-8"),
    -        }
    -        
    -        result = self.send_request("/tools/add_tool", data)
    -        return result
    -    
    -
    -
    -    def add_mcp_tool(self, name: str, command: str, args: List[str], env: Dict[str, str] = {}) -> Dict[str, Any]:
    -        result = self.send_request("/tools/add_mcp_tool", {"name": name, "command": command, "args": args, "env": env})
    -        error_handler(result)
    -        mcp_tool_operation(f"MCP Tool: {name}", "Successfully Added")
    -        return result
    -
    -    def install_library(self, library: str) -> Dict[str, Any]:
    -        result = self.send_request("/tools/install_library", {"library": library})
    -        return result
    -
    -    def uninstall_library(self, library: str) -> Dict[str, Any]:
    -        result = self.send_request("/tools/uninstall_library", {"library": library})
    -        return result
    -
    -    def mcp(self):
    -        """
    -        Decorator to register a class as an MCP tool.
    -        Usage:
    -        @client.mcp()
    -        class ToolName:
    -            command = "command-name"
    -            args = ["arg1", "arg2"]
    -            env = {"key": "value"}
    -        """
    -        def decorator(cls):
    -            command = getattr(cls, "command", None)
    -            args = getattr(cls, "args", [])
    -            env = getattr(cls, "env", {})
    -
    -            name = cls.__name__
    -            
    -            if not command:
    -                raise ValueError("MCP tool class must have a 'command' attribute")
    -                
    -            self.add_mcp_tool(name, command, args, env)
    -            return cls
    -        return decorator
    -
    -    def sse_mcp(self):
    -        """
    -        Decorator to register a class as an MCP tool that uses Server-Sent Events (SSE).
    -        Usage:
    -        @client.sse_mcp()
    -        class ToolName:
    -            url = "https://example.com/sse"
    -        
    -        """
    -        def decorator(cls):
    -            url = getattr(cls, "url", None)
    -
    -            name = cls.__name__
    -
    -            if not url:
    -                raise ValueError("SSE MCP tool class must have a 'url' attribute")
    -            
    -            self.add_sse_mcp(name, url)
    -            return cls
    -        return decorator
    -
    -
    -    def add_sse_mcp(self, name: str, url: str) -> Dict[str, Any]:
    -        result = self.send_request("/tools/add_sse_mcp", {"name": name, "url": url})
    -        return result
    
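The class branch of the removed `Tools.tool` decorator turns every public method into a standalone function named `ClassName__method` before registering it. A minimal sketch of that registration pattern, sync methods only, with an in-memory `REGISTRY` dict as a hypothetical stand-in for the server-side `add_tool` call:

```python
import inspect
from functools import wraps

REGISTRY = {}  # hypothetical stand-in for the server-side tool store


def add_tool(fn):
    REGISTRY[fn.__name__] = fn


def tool(cls):
    # Register every public method of cls under "ClassName__method",
    # mirroring the class branch of Tools.tool above.
    for name, method in inspect.getmembers(cls, callable):
        if name.startswith("_"):
            continue

        def make(m, full_name):
            # Factory closure so each wrapper binds its own method,
            # avoiding Python's late-binding-in-loops pitfall.
            @wraps(m)
            def standalone(*args, **kwargs):
                return m(*args, **kwargs)
            standalone.__name__ = full_name
            return standalone

        add_tool(make(method, f"{cls.__name__}__{name}"))
    return cls


@tool
class MathTools:
    @staticmethod
    def double(x):
        return x * 2

    @staticmethod
    def shout(text):
        return text.upper()
```

The `make` factory is the important detail: defining the wrapper directly in the loop would close over the loop variable, so every registered tool would call the last method seen.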
  • src/upsonic/direct/direct_llm_cal.py+220 0 added
    @@ -0,0 +1,220 @@
    +from ..tasks.tasks import Task
    +from ..models.model_registry import ModelNames
    +from ..utils.printing import print_price_id_summary, call_end
    +from ..utils.direct_llm_call.tool_usage import tool_usage
    +from ..utils.direct_llm_call.llm_usage import llm_usage
    +from ..utils.direct_llm_call.task_end import task_end
    +from ..utils.direct_llm_call.task_start import task_start
    +from ..utils.direct_llm_call.task_response import task_response
    +from ..utils.direct_llm_call.agent_tool_register import agent_tool_register
    +from ..utils.direct_llm_call.model import get_agent_model
    +from ..utils.direct_llm_call.agent_creation import agent_create
    +from ..utils.error_wrapper import upsonic_error_handler
    +import time
    +import asyncio
    +from typing import Any, List, Union
    +from pydantic_ai import Agent as PydanticAgent, BinaryContent
    +import os
    +from ..utils.model_set import model_set
    +
    +class Direct:
    +    """A class for making direct LLM calls through the Upsonic client."""
    +
    +    def __init__(self, 
    +                 model: ModelNames | None = None, 
    +                 debug: bool = False, 
    +                 name: str | None = None, 
    +                 company_url: str | None = None, 
    +                 company_objective: str | None = None,
    +                 company_description: str | None = None,
    +                 system_prompt: str | None = None,
    +                 memory: str | None = None,
    +                 reflection: str | None = None,
    +                 compress_context: bool = False,
    +                 agent_id_: str | None = None,
    +                 ):
    +        model = model_set(model)
    +
    +        self.model = model
    +        self.debug = debug
    +        self.default_llm_model = model
    +
    +    def _build_agent_input(self, task: Task):
    +        """
    +        Build the input for the agent run function, including images if present.
    +        
    +        Args:
    +            task: The task containing description and potentially images
    +            
    +        Returns:
    +            Either a string (description only) or a list containing description and BinaryContent objects
    +        """
    +        if not task.images:
    +            return task.description
    +            
    +        # Build input list with description and images
    +        input_list = [task.description]
    +        
    +        for image_path in task.images:
    +            try:
    +                with open(image_path, "rb") as image_file:
    +                    image_data = image_file.read()
    +                
    +                # Determine media type based on file extension
    +                file_extension = image_path.lower().split('.')[-1]
    +                media_type = f'image/{file_extension}'
    +                    
    +                input_list.append(BinaryContent(data=image_data, media_type=media_type))
    +                
    +            except Exception as e:
    +                # Log error but continue with other images
    +                if self.debug:
    +                    print(f"Warning: Could not load image {image_path}: {e}")
    +                continue
    +                
    +        return input_list
    +
    +    @upsonic_error_handler(max_retries=3, show_error_details=True)
    +    async def do_async(self, task: Union[Task, List[Task]], model: ModelNames | None = None, debug: bool = False, retry: int = 3):
    +        """
    +        Execute a direct LLM call with the given task and model asynchronously.
    +        
    +        Args:
    +            task: The task to execute or list of tasks
    +            model: The LLM model to use
    +
    +            debug: Whether to enable debug mode
    +            retry: Number of retries for failed calls (default: 3)
    +            
    +        Returns:
    +            The response from the LLM
    +        """
    +        start_time = time.time()
    +        
    +        @upsonic_error_handler(max_retries=retry, show_error_details=debug)
    +        async def _execute_single_task(single_task: Task, llm_model: ModelNames | None, task_start_time: float, task_debug: bool, task_retry: int):
    +            """
    +            Execute a single task with the LLM.
    +            
    +            Args:
    +                single_task: The task to execute
    +                llm_model: The LLM model to use
    +                task_start_time: Start time for timing
    +                task_debug: Whether to enable debug mode
    +                task_retry: Number of retries for failed calls
    +            """
    +            # LLM Selection
    +            if llm_model is None:
    +                llm_model = self.default_llm_model
    +
    +            # Start Time For Task
    +            task_start(single_task)
    +
    +            # Get the model from registry
    +            agent_model, error = get_agent_model(llm_model)
    +            if error:
    +                return error
    +
    +            # Create agent
    +            agent = await agent_create(agent_model, single_task)
    +            agent_tool_register(None, agent, single_task)
    +
    +            # Make request to the model using MCP servers context manager
    +            async with agent.run_mcp_servers():
    +                model_response = await agent.run(self._build_agent_input(single_task))
    +
    +            # Setting Task Response
    +            task_response(model_response, single_task)
    +
    +            # End Time For Task
    +            task_end(single_task)
    +            
    +            # Calculate usage and tool usage
    +            usage = llm_usage(model_response)
    +            tool_usage_result = tool_usage(model_response, single_task)
    +            
    +            # Call end logging
    +            call_end(model_response.output, llm_model, single_task.response_format, task_start_time, time.time(), usage, tool_usage_result, task_debug, single_task.price_id)
    +        
    +        # Handle single task or list of tasks
    +        if isinstance(task, list):
    +            for each_task in task:
    +                await _execute_single_task(each_task, model, start_time, debug, retry)
    +        else:
    +            await _execute_single_task(task, model, start_time, debug, retry)
    +            
    +        # Print the price ID summary if the task has a price ID
    +        if not isinstance(task, list) and not task.not_main_task:
    +            print_price_id_summary(task.price_id, task)
    +            
    +        return task.response if not isinstance(task, list) else [t.response for t in task]
    +
    +    @upsonic_error_handler(max_retries=3, show_error_details=True)
    +    async def print_do_async(self, task: Union[Task, List[Task]], model: ModelNames | None = None, debug: bool = False, retry: int = 3):
    +        """
    +        Execute a direct LLM call and print the result asynchronously.
    +        
    +        Args:
    +            task: The task to execute or list of tasks
    +            model: The LLM model to use
    +            debug: Whether to enable debug mode
    +            retry: Number of retries for failed calls (default: 3)
    +            
    +        Returns:
    +            The response from the LLM
    +        """
    +        result = await self.do_async(task, model, debug, retry)
    +        print(result)
    +        return result
    +
    +    @upsonic_error_handler(max_retries=3, show_error_details=True)
    +    def do(self, task: Union[Task, List[Task]], model: ModelNames | None = None, debug: bool = False, retry: int = 3):
    +        """
    +        Execute a direct LLM call with the given task and model synchronously.
    +        
    +        Args:
    +            task: The task to execute or list of tasks
    +            model: The LLM model to use
    +            debug: Whether to enable debug mode
    +            retry: Number of retries for failed calls (default: 3)
    +            
    +        Returns:
    +            The response from the LLM
    +        """
    +        try:
    +            loop = asyncio.get_event_loop()
    +        except RuntimeError:
    +            # No event loop running, create a new one
    +            return asyncio.run(self.do_async(task, model, debug, retry))
    +        
    +        if loop.is_running():
    +            # Event loop is already running, we need to run in a new thread
    +            import concurrent.futures
    +            with concurrent.futures.ThreadPoolExecutor() as executor:
    +                future = executor.submit(asyncio.run, self.do_async(task, model, debug, retry))
    +                return future.result()
    +        else:
    +            # Event loop exists but not running, we can use it
    +            return loop.run_until_complete(self.do_async(task, model, debug, retry))
    +
    +    @upsonic_error_handler(max_retries=3, show_error_details=True)
    +    def print_do(self, task: Union[Task, List[Task]], model: ModelNames | None = None, debug: bool = False, retry: int = 3):
    +        """
    +        Execute a direct LLM call and print the result synchronously.
    +        
    +        Args:
    +            task: The task to execute or list of tasks
    +            model: The LLM model to use
    +            debug: Whether to enable debug mode
    +            retry: Number of retries for failed calls (default: 3)
    +            
    +        Returns:
    +            The response from the LLM
    +        """
    +        result = self.do(task, model, debug, retry)
    +        print(result)
    +        return result
    +
    +
    +
    +
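The synchronous `do()` wrapper in the diff above bridges into the async implementation by inspecting the event-loop state: run a fresh loop when none exists, hop to a worker thread when a loop is already running, and drive an idle loop directly. A minimal sketch of the same pattern, with a stand-in `work` coroutine in place of `do_async`:

```python
import asyncio
import concurrent.futures

async def work(x: int) -> int:
    # Stand-in for an async implementation such as do_async
    await asyncio.sleep(0)
    return x * 2

def run_sync(x: int) -> int:
    """Run the coroutine whether or not an event loop is already active."""
    try:
        loop = asyncio.get_event_loop()
    except RuntimeError:
        # No usable loop in this thread: asyncio.run creates and closes one
        return asyncio.run(work(x))

    if loop.is_running():
        # A loop is already running (e.g. inside Jupyter): asyncio.run would
        # raise here, so execute in a fresh thread with its own loop
        with concurrent.futures.ThreadPoolExecutor() as executor:
            return executor.submit(asyncio.run, work(x)).result()
    # Loop exists but is idle: drive it directly
    return loop.run_until_complete(work(x))
```

The thread-pool branch matters because `asyncio.run` refuses to start when called from a thread that already owns a running loop.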
    
  • src/upsonic/direct/__init__.py (+0 −0, added)
  • src/upsonic/graph/graph.py (+28 −25, renamed)
    @@ -7,25 +7,20 @@
     from rich.progress import Progress, TextColumn, BarColumn, SpinnerColumn, TimeElapsedColumn, TimeRemainingColumn
     import uuid
     import time
    +import asyncio
     from concurrent.futures import ThreadPoolExecutor, as_completed
     
    -from .printing import console, spacing, escape_rich_markup
    -from .tasks.tasks import Task
    -from .tasks.task_response import ObjectResponse
    -from .agent_configuration.agent_configuration import AgentConfiguration
    +from ..utils.printing import console, spacing, escape_rich_markup
    +from ..tasks.tasks import Task
    +from ..tasks.task_response import ObjectResponse
    +from ..direct.direct_llm_cal import Direct as AgentConfiguration
     
     # Define DecisionResponse at module level
     class DecisionResponse(ObjectResponse):
         """Response type for LLM-based decisions that returns a boolean result."""
         result: bool
     
    -# Import Direct for type checking
    -try:
    -    from .direct_llm.direct import Direct
    -except ImportError:
    -    # Define a placeholder for type checking if the import fails
    -    class Direct:
    -        pass
    +from ..direct.direct_llm_cal import Direct  
     
     
     class DecisionLLM(BaseModel):
    @@ -60,7 +55,7 @@ def __init__(self, description: str, *, true_branch=None, false_branch=None, id=
                 id = str(uuid.uuid4())
             super().__init__(description=description, true_branch=true_branch, false_branch=false_branch, id=id, **kwargs)
         
    -    def evaluate(self, data: Any) -> bool:
    +    async def evaluate(self, data: Any) -> bool:
             """
             Evaluates the decision using an LLM with the provided data.
             
    @@ -612,7 +607,7 @@ def _get_available_agent(self) -> Any:
             # No agent found
             return None
     
    -    def _execute_task(self, node: TaskNode, state: State, verbose: bool = False) -> Any:
    +    async def _execute_task(self, node: TaskNode, state: State, verbose: bool = False) -> Any:
             """
             Executes a single task.
             
    @@ -666,8 +661,12 @@ def _execute_task(self, node: TaskNode, state: State, verbose: bool = False) ->
                 if previous_outputs and not task.context:
                     task.context = previous_outputs
                 
    -            # Execute the task - both AgentConfiguration and Direct have the do method
    -            output = runner.do(task)
    +            # Execute the task - use do_async for async execution
    +            if hasattr(runner, 'do_async'):
    +                output = await runner.do_async(task)
    +            else:
    +                # Fallback to synchronous do method if do_async is not available
    +                output = runner.do(task)
                 
                 # End timing
                 end_time = time.time()
    @@ -703,7 +702,7 @@ def _execute_task(self, node: TaskNode, state: State, verbose: bool = False) ->
                     console.print(f"[bold red]Task '{escape_rich_markup(task.description)}' failed: {escape_rich_markup(str(e))}[/bold red]")
                 raise
         
    -    def _evaluate_decision(self, decision_node: Union[DecisionFunc, DecisionLLM], state: State, verbose: bool = False) -> Union[TaskNode, TaskChain, None]:
    +    async def _evaluate_decision(self, decision_node: Union[DecisionFunc, DecisionLLM], state: State, verbose: bool = False) -> Union[TaskNode, TaskChain, None]:
             """
             Evaluates a decision node to determine which branch to follow.
             
    @@ -740,7 +739,11 @@ def _evaluate_decision(self, decision_node: Union[DecisionFunc, DecisionLLM], st
                 )
                 
                 # Execute the task using the agent
    -            response = agent.do(decision_task)
    +            if hasattr(agent, 'do_async'):
    +                response = await agent.do_async(decision_task)
    +            else:
    +                # Fallback to synchronous do method if do_async is not available
    +                response = agent.do(decision_task)
                 
                 # Get the boolean result from the structured response
                 result = response.result if hasattr(response, 'result') else False
    @@ -886,7 +889,7 @@ def _get_next_nodes(self, node: Union[TaskNode, DecisionFunc, DecisionLLM]) -> L
             
             return next_nodes
         
    -    def _run_sequential(self, verbose: bool = False, show_progress: bool = True) -> State:
    +    async def _run_sequential(self, verbose: bool = False, show_progress: bool = True) -> State:
             """
             Runs tasks sequentially with support for decision nodes.
             
    @@ -953,7 +956,7 @@ def _run_sequential(self, verbose: bool = False, show_progress: bool = True) ->
                                     if verbose:
                                         console.print(f"[dim]Setting context from previous output for task: {escape_rich_markup(node.task.description)}[/dim]")
                             
    -                        output = self._execute_task(node, self.state, verbose)
    +                        output = await self._execute_task(node, self.state, verbose)
                             self.state.update(node.id, output)
                             executed_nodes.add(node.id)
                             
    @@ -966,7 +969,7 @@ def _run_sequential(self, verbose: bool = False, show_progress: bool = True) ->
                         
                         elif isinstance(node, (DecisionFunc, DecisionLLM)):
                             # Evaluate the decision
    -                        branch = self._evaluate_decision(node, self.state, verbose)
    +                        branch = await self._evaluate_decision(node, self.state, verbose)
                             executed_nodes.add(node.id)
                             
                             # Add the appropriate branch to the execution queue
    @@ -1025,7 +1028,7 @@ def _run_sequential(self, verbose: bool = False, show_progress: bool = True) ->
                                 # Set the context for this task
                                 node.task.context = [latest_output]
                         
    -                    output = self._execute_task(node, self.state, verbose)
    +                    output = await self._execute_task(node, self.state, verbose)
                         self.state.update(node.id, output)
                         executed_nodes.add(node.id)
                         
    @@ -1038,7 +1041,7 @@ def _run_sequential(self, verbose: bool = False, show_progress: bool = True) ->
                     
                     elif isinstance(node, (DecisionFunc, DecisionLLM)):
                         # Evaluate the decision
    -                    branch = self._evaluate_decision(node, self.state, verbose)
    +                    branch = await self._evaluate_decision(node, self.state, verbose)
                         executed_nodes.add(node.id)
                         
                         # Add the appropriate branch to the execution queue
    @@ -1119,7 +1122,7 @@ def _count_all_possible_nodes(self) -> int:
             # Return the count, minimum of 1 to avoid division by zero
             return max(len(counted), 1)
         
    -    def run(self, verbose: bool = True, show_progress: bool = None) -> State:
    +    async def run(self, verbose: bool = True, show_progress: bool = None) -> State:
             """
             Executes the graph, running all tasks in the appropriate order.
             
    @@ -1142,7 +1145,7 @@ def run(self, verbose: bool = True, show_progress: bool = None) -> State:
             self.state = State()
             
             # With decision support, we always use the sequential implementation for now
    -        return self._run_sequential(verbose, show_progress)
    +        return await self._run_sequential(verbose, show_progress)
         
         def get_output(self) -> Any:
             """
    @@ -1269,4 +1272,4 @@ def _task_rshift(self, other):
         return chain
     
     # Apply the patch to the Task class
    -Task.__rshift__ = _task_rshift
    +Task.__rshift__ = _task_rshift
    \ No newline at end of file
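The graph executor above probes each runner with `hasattr(runner, 'do_async')` and awaits it when available, falling back to the blocking `do`. A reduced sketch of that dispatch; the `SyncRunner` and `AsyncRunner` classes are illustrative, not part of the package:

```python
import asyncio

class SyncRunner:
    def do(self, task: str) -> str:
        return f"sync:{task}"

class AsyncRunner:
    async def do_async(self, task: str) -> str:
        await asyncio.sleep(0)
        return f"async:{task}"

async def execute(runner, task: str) -> str:
    # Prefer the coroutine API when the runner provides one
    if hasattr(runner, "do_async"):
        return await runner.do_async(task)
    # Fallback: blocking call (note this runs on the event-loop thread)
    return runner.do(task)
```

Duck-typing on the method name keeps the executor compatible with older runner objects that only expose the synchronous interface.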
    
  • src/upsonic/graph/__init__.py (+0 −0, added)
  • src/upsonic/__init__.py (+34 −12, modified)
    @@ -5,25 +5,47 @@
     warnings.filterwarnings("ignore", category=DeprecationWarning)
     
     
    -from .client.base import UpsonicClient
    -from .client.tasks.task_response import ObjectResponse
    -from .client.tasks.tasks import Task
    -from .client.agent_configuration.agent_configuration import AgentConfiguration
    -from .client.agent_configuration.agent_configuration import AgentConfiguration as Agent
    -from .client.knowledge_base.knowledge_base import KnowledgeBase
    -from .client.direct_llm_call.direct_llm_cal import Direct
    -from .client.team.team import Team
     
    +from .tasks.tasks import Task
     
    -from .client.storage.storage import ClientConfig
    +from .knowledge_base.knowledge_base import KnowledgeBase
    +from .direct.direct_llm_cal import Direct
    +from .direct.direct_llm_cal import Direct as Agent
    +from .graph.graph import Graph
    +
    +# Export error handling components for advanced users
    +from .utils.package.exception import (
    +    UupsonicError, 
    +    AgentExecutionError, 
    +    ModelConnectionError, 
    +    TaskProcessingError, 
    +    ConfigurationError, 
    +    RetryExhaustedError,
    +    NoAPIKeyException
    +)
    +from .utils.error_wrapper import upsonic_error_handler
     
    -from .client.graph import Graph, DecisionFunc, DecisionLLM
     
    -from pydantic import Field
     
     
     def hello() -> str:
         return "Hello from upsonic!"
     
     
    -__all__ = ["hello", "UpsonicClient", "ObjectResponse","Task", "StrInListResponse", "AgentConfiguration", "Field", "KnowledgeBase", "ClientConfig", "Agent", "Direct", "Team", "Graph", "DecisionFunc", "DecisionLLM"]
    +__all__ = [
    +    "hello", 
    +    "Task", 
    +    "KnowledgeBase", 
    +    "Direct", 
    +    "Agent",
    +    "Graph",
    +    # Error handling exports
    +    "UupsonicError",
    +    "AgentExecutionError", 
    +    "ModelConnectionError", 
    +    "TaskProcessingError", 
    +    "ConfigurationError", 
    +    "RetryExhaustedError",
    +    "NoAPIKeyException",
    +    "upsonic_error_handler"
    +]
    
  • src/upsonic/knowledge_base/__init__.py (+0 −0, added)
  • src/upsonic/knowledge_base/knowledge_base.py (+23 −4, renamed)
    @@ -1,7 +1,7 @@
     from dataclasses import Field
     import uuid
     from pydantic import BaseModel
    -
    +from ..utils.error_wrapper import upsonic_error_handler
     
     from typing import Any, List, Dict, Optional, Type, Union
     
    @@ -29,6 +29,7 @@ def add_file(self, file_path: str):
         def remove_file(self, file_path: str):
             self.sources.remove(file_path)
     
    +    @upsonic_error_handler(max_retries=2, show_error_details=True)
         async def setup_rag(self, client):
             from lightrag import LightRAG, QueryParam
             from lightrag.llm.openai import openai_embed, gpt_4o_mini_complete
    @@ -53,6 +54,7 @@ async def setup_rag(self, client):
     
     
     
    +    @upsonic_error_handler(max_retries=2, show_error_details=True)
         async def query(self, query: str, mode: str = "naive") -> List[str]:
             from lightrag import LightRAG, QueryParam
             from lightrag.llm.openai import openai_embed, gpt_4o_mini_complete
    @@ -81,16 +83,33 @@ async def query(self, query: str, mode: str = "naive") -> List[str]:
     
     
     
    -    def markdown(self, client):
    +    @upsonic_error_handler(max_retries=1, show_error_details=True)
    +    def markdown(self):
             knowledge_base = KnowledgeBaseMarkdown(knowledges={})
             the_list_of_files = self.sources
             
     
             for each in the_list_of_files:
    -            markdown_content = client.markdown(each)
    +
    +            # Convert to markdown
    +            from markitdown import MarkItDown
    +
    +            md = MarkItDown()
    +            markdown_content = md.convert(each).text_content
    +
     
                 knowledge_base.knowledges[each] = markdown_content
     
     
    +        the_overall_string = ""
    +    
    +        for each in knowledge_base.knowledges:
    +            the_overall_string += f"""
    +            <{each}>
    +            {knowledge_base.knowledges[each]}
    +            </{each}>
    +            \n\n
    +            """
    +        
    +        return the_overall_string
     
    -        return knowledge_base
    \ No newline at end of file
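The rewritten `markdown()` no longer returns a `KnowledgeBaseMarkdown` object: it concatenates every converted document into one string, wrapping each under a tag named after its source path. The wrapping step alone looks roughly like this (the real method obtains each document's content via markitdown's `MarkItDown().convert(...)`; the dictionary here is a placeholder):

```python
def wrap_knowledges(knowledges: dict[str, str]) -> str:
    """Concatenate per-source markdown, delimiting each under <source> tags."""
    overall = ""
    for name, content in knowledges.items():
        overall += f"""
            <{name}>
            {content}
            </{name}>
            \n\n
            """
    return overall
```

Tagging each block with its file name lets a downstream LLM attribute retrieved content back to a specific source.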
    
  • src/upsonic/models/__init__.py (+0 −0, added)
  • src/upsonic/models/model_registry.py (+37 −3, renamed)
    @@ -1,3 +1,4 @@
    +import os
     from pydantic_ai.settings import ModelSettings
     from decimal import Decimal
     from pydantic_ai.models.openai import OpenAIModelSettings
    @@ -11,6 +12,10 @@
         "openai/gpt-4.5-preview",
         "openai/o3-mini",
         "openai/gpt-4o-mini",
    +    "openai/gpt-4.1-nano",
    +    "openai/gpt-4.1-mini",
    +    "openai/gpt-4.1",
    +    "openai/o4-mini",
         "azure/gpt-4o",
         "azure/gpt-4o-mini",
         "claude/claude-3-5-sonnet",
    @@ -63,7 +68,20 @@
             "pricing": {"input": 75.00, "output": 150.00},
             "required_environment_variables": ["OPENAI_API_KEY"]
         },
    -
    +    "openai/gpt-4.1-nano": {
    +        "provider": "openai", 
    +        "model_name": "gpt-4.1-nano", 
    +        "capabilities": [],
    +        "pricing": {"input": 0.10, "output": 0.40},
    +        "required_environment_variables": ["OPENAI_API_KEY"]
    +    },
    +    "openai/gpt-4.1-mini": {
    +        "provider": "openai", 
    +        "model_name": "gpt-4.1-mini", 
    +        "capabilities": [],
    +        "pricing": {"input": 0.40, "output": 1.60},
    +        "required_environment_variables": ["OPENAI_API_KEY"]
    +    },
         "openai/o3-mini": {
             "provider": "openai", 
             "model_name": "o3-mini", 
    @@ -78,7 +96,20 @@
             "pricing": {"input": 0.15, "output": 0.60},
             "required_environment_variables": ["OPENAI_API_KEY"]
         },
    -    
    +    "openai/gpt-4.1": {
    +        "provider": "openai", 
    +        "model_name": "gpt-4.1", 
    +        "capabilities": [],
    +        "pricing": {"input": 2.0, "output": 8.0},
    +        "required_environment_variables": ["OPENAI_API_KEY"]
    +    },
    +    "openai/o4-mini": {
    +        "provider": "openai", 
    +        "model_name": "o4-mini", 
    +        "capabilities": [],
    +        "pricing": {"input": 1.10, "output": 4.40},
    +        "required_environment_variables": ["OPENAI_API_KEY"]
    +    },
         # Azure OpenAI models
         "azure/gpt-4o": {
             "provider": "azure_openai", 
    @@ -222,6 +253,9 @@
     # Helper functions for model registry access
     
     def get_model_registry_entry(llm_model: str):
    +
    +
    +    
         """Get model registry entry or return None if not found."""
         if llm_model in MODEL_REGISTRY:
             return MODEL_REGISTRY[llm_model]
    @@ -244,7 +278,7 @@ def get_model_registry_entry(llm_model: str):
             if model_id.lower() == llm_model_lower:
                 return details
         
    -    print(f"Warning: Model '{llm_model}' not found in registry")
    +
         return None
     
     def get_model_family(provider_type: str):
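With the warning `print` removed in the hunk above, `get_model_registry_entry` now returns `None` silently for unknown models, after trying an exact match and then a case-insensitive scan. A small sketch of that lookup order, with a two-entry illustrative registry:

```python
MODEL_REGISTRY = {
    "openai/gpt-4o-mini": {"provider": "openai", "model_name": "gpt-4o-mini"},
    "openai/gpt-4.1-nano": {"provider": "openai", "model_name": "gpt-4.1-nano"},
}

def get_registry_entry(llm_model: str):
    """Exact match first, then case-insensitive; None when absent."""
    if llm_model in MODEL_REGISTRY:
        return MODEL_REGISTRY[llm_model]
    lowered = llm_model.lower()
    for model_id, details in MODEL_REGISTRY.items():
        if model_id.lower() == lowered:
            return details
    return None  # unknown model: the caller decides how to surface the error
```

Returning `None` rather than printing keeps the registry quiet and leaves error reporting to the error-handler layer.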
    
  • src/upsonic/reliability_processor.py (+0 −375, removed)
    @@ -1,375 +0,0 @@
    -from copy import deepcopy
    -from typing import Any, Optional, Union, Type, List
    -from pydantic import BaseModel, Field
    -from enum import Enum
    -import re
    -from urllib.parse import urlparse
    -import requests
    -import asyncio
    -from .client.tasks.tasks import Task
    -from .client.agent_configuration.agent_configuration import AgentConfiguration
    -from .client.tasks.task_response import ObjectResponse
    -
    -# Define the validation prompts
    -url_validation_prompt = """
    -Focus on basic URL source validation:
    -
    -Source Verification:
    -- Check if the source is come from the content. But dont make assumption just check the context and try to find exact things. If not flag it.
    -- If you can see the things in the context everything okay (Trusted Source).
    -
    -IMPORTANT: If the URL source cannot be verified, flag it as suspicious.
    -"""
    -
    -number_validation_prompt = """
    -Focus on basic numerical validation:
    -
    -Number Verification:
    -- Check if the source is come from the content. But dont make assumption just check the context and try to find exact things. If not flag it.
    -- If you can see the things in the context everything okay (Trusted Source).
    -
    -IMPORTANT: If the numbers cannot be verified, flag them as suspicious.
    -"""
    -
    -code_validation_prompt = """
    -Focus on basic code validation:
    -
    -Code Verification:
    -- Check if the source is come from the content. But dont make assumption just check the context and try to find exact things. If not flag it.
    -- If you can see the things in the context everything okay (Trusted Source).
    -
    -IMPORTANT: If the code cannot be verified or appears suspicious, flag it as suspicious.
    -"""
    -
    -information_validation_prompt = """
    -Focus on basic information validation:
    -
    -Information Verification:
    -- Check if the source is come from the content. But dont make assumption just check the context and try to find exact things. If not flag it.
    -- If you can see the things in the context everything okay (Trusted Source).
    -
    -IMPORTANT: If the information cannot be verified, flag it as suspicious.
    -"""
    -
    -editor_task_prompt = """
    -Clean and validate the output by handling suspicious content:
    -
    -Processing Rules:
    -1. For ANY suspicious content identified in validation:
    -- Replace the suspicious value with None
    -- Do not suggest alternatives
    -- Do not provide explanations
    -- Do not modify other parts of the content
    -
    -2. For non-suspicious content:
    -- Keep the original value unchanged
    -- Do not enhance or modify
    -- Do not add additional information
    -
    -Processing Steps:
    -- Set suspicious fields to None
    -- Keep other fields as is
    -- Remove any suspicious content entirely
    -- Maintain original structure
    -
    -Validation Issues Found:
    -{validation_feedback}
    -
    -IMPORTANT:
    -- Set ALL suspicious values to None
    -- Keep verified values unchanged
    -- No explanations or suggestions
    -- No partial validations
    -- Maintain response format
    -"""
    -
    -class SourceReliability(Enum):
    -    HIGH = "high"
    -    MEDIUM = "medium"
    -    LOW = "low"
    -    UNKNOWN = "unknown"
    -
    -class ValidationPoint(ObjectResponse):
    -    is_suspicious: bool
    -    feedback: str
    -    suspicious_points: list[str] = Field(description = "Suspicious informations raw name")
    -    source_reliability: SourceReliability = SourceReliability.UNKNOWN
    -    verification_method: str = ""
    -    confidence_score: float = 0.0
    -
    -class ValidationResult(ObjectResponse):
    -    url_validation: ValidationPoint
    -    number_validation: ValidationPoint
    -    information_validation: ValidationPoint
    -    code_validation: ValidationPoint
    -    any_suspicion: bool
    -    suspicious_points: list[str]
    -    overall_feedback: str
    -    overall_confidence: float = 0.0
    -
    -    def calculate_suspicion(self) -> str:
    -        self.any_suspicion = any([
    -            self.url_validation.is_suspicious,
    -            self.number_validation.is_suspicious,
    -            self.information_validation.is_suspicious,
    -            self.code_validation.is_suspicious
    -        ])
    -
    -        self.suspicious_points = []
    -        validation_details = []
    -
    -        # Collect URL validation details
    -        if self.url_validation.is_suspicious:
    -            self.suspicious_points.extend(self.url_validation.suspicious_points)
    -            validation_details.append(f"URL Issues: {self.url_validation.feedback}")
    -            validation_details.extend([f"- {point}" for point in self.url_validation.suspicious_points])
    -
    -        # Collect number validation details
    -        if self.number_validation.is_suspicious:
    -            self.suspicious_points.extend(self.number_validation.suspicious_points)
    -            validation_details.append(f"Number Issues: {self.number_validation.feedback}")
    -            validation_details.extend([f"- {point}" for point in self.number_validation.suspicious_points])
    -            
    -        # Collect information validation details
    -        if self.information_validation.is_suspicious:
    -            self.suspicious_points.extend(self.information_validation.suspicious_points)
    -            validation_details.append(f"Information Issues: {self.information_validation.feedback}")
    -            validation_details.extend([f"- {point}" for point in self.information_validation.suspicious_points])
    -
    -        # Collect code validation details
    -        if self.code_validation.is_suspicious:
    -            self.suspicious_points.extend(self.code_validation.suspicious_points)
    -            validation_details.append(f"Code Issues: {self.code_validation.feedback}")
    -            validation_details.extend([f"- {point}" for point in self.code_validation.suspicious_points])
    -
    -        # Calculate overall confidence
    -        self.overall_confidence = sum([
    -            self.url_validation.confidence_score,
    -            self.number_validation.confidence_score,
    -            self.information_validation.confidence_score,
    -            self.code_validation.confidence_score
    -        ]) / 4.0
    -
    -        # Generate overall feedback
    -        if validation_details:
    -            self.overall_feedback = "\n".join(validation_details)
    -        else:
    -            self.overall_feedback = "No suspicious content detected."
    -
    -        # Return complete validation summary for editor
    -        validation_summary = [
    -            "Validation Summary:",
    -            f"Overall Confidence: {self.overall_confidence:.2f}",
    -            f"Suspicious Content Detected: {'Yes' if self.any_suspicion else 'No'}",
    -            "\nDetailed Feedback:",
    -            self.overall_feedback
    -        ]
    -        
    -        return "\n".join(validation_summary)
    -
    -class ReliabilityProcessor:
    -    def __init__(self, confidence_threshold: float = 0.7):
    -        self.confidence_threshold = confidence_threshold
    -
    -    @staticmethod
    -    async def process_result(
    -        result: Any,
    -        reliability_layer: Optional[Any] = None,
    -        task: Optional[Task] = None,
    -        llm_model: Optional[str] = None
    -    ) -> Any:
    -        if reliability_layer is None:
    -            return result
    -    
    -        old_task_output = result
    -        try:
    -            old_task_output = result.model_dump()
    -        except:
    -            pass
    -
    -        prevent_hallucination = getattr(reliability_layer, 'prevent_hallucination', 0)
    -        if isinstance(prevent_hallucination, property):
    -            prevent_hallucination = prevent_hallucination.fget(reliability_layer)
    -
    -        processed_result = result
    -
    -        if prevent_hallucination > 0:
    -            if prevent_hallucination == 10:
    -                copy_task = deepcopy(task)
    -                copy_task._response = result
    -
    -                validation_result = ValidationResult(
    -                    url_validation=ValidationPoint(
    -                        is_suspicious=False, 
    -                        feedback="",
    -                        suspicious_points=[],
    -                        source_reliability=SourceReliability.UNKNOWN,
    -                        verification_method="",
    -                        confidence_score=0.0
    -                    ),
    -                    number_validation=ValidationPoint(
    -                        is_suspicious=False, 
    -                        feedback="",
    -                        suspicious_points=[],
    -                        source_reliability=SourceReliability.UNKNOWN,
    -                        verification_method="",
    -                        confidence_score=0.0
    -                    ),
    -                    information_validation=ValidationPoint(
    -                        is_suspicious=False, 
    -                        feedback="",
    -                        suspicious_points=[],
    -                        source_reliability=SourceReliability.UNKNOWN,
    -                        verification_method="",
    -                        confidence_score=0.0
    -                    ),
    -                    code_validation=ValidationPoint(
    -                        is_suspicious=False, 
    -                        feedback="",
    -                        suspicious_points=[],
    -                        source_reliability=SourceReliability.UNKNOWN,
    -                        verification_method="",
    -                        confidence_score=0.0
    -                    ),
    -                    any_suspicion=False,
    -                    suspicious_points=[],
    -                    overall_feedback=""
    -                )
    -
    -                # Create a list to store validation tasks
    -                validation_tasks = []
    -                validation_types = []
    -                validator_agents = {}
    -
    -                # Process context strings once for all validations
    -                context_strings = []
    -                context_strings.append(f"Given Task: {copy_task.description}")
    -
    -                # Process context items if they exist
    -                if copy_task.context:
    -                    context_items = copy_task.context if isinstance(copy_task.context, list) else [copy_task.context]
    -                    if copy_task.response_format:
    -                        context_items.append(copy_task.response_format)
    -                    for item in context_items:
    -                        type_string = type(item).__name__
    -                        the_class_string = None
    -                        try:
    -                            the_class_string = item.__bases__[0].__name__
    -                        except:
    -                            pass
    -
    -                        if the_class_string == ObjectResponse.__name__ or the_class_string == BaseModel.__name__:
    -                            context_strings.append(f"\n\nUser requested output: ```Requested Output {item.model_fields}```")
    -                        elif isinstance(item, str):
    -                            context_strings.append(f"\n\nContext That Came From User (Trusted Source): ```User given context {item}```")
    -                        else:
    -                            pass
    -
    -                # Add the current AI response to context
    -                context_strings.append(f"\nCurrent AI Response (Untrusted Source, last AI responose that we are checking now): {old_task_output}")
    -
    -                # Prepare validation tasks
    -                for validation_type, prompt in [
    -                    ("url_validation", url_validation_prompt),
    -                    ("number_validation", number_validation_prompt),
    -                    ("information_validation", information_validation_prompt),
    -                    ("code_validation", code_validation_prompt),
    -                ]:
    -                    # Create a specific agent for each validation type
    -                    agent_name = f"{validation_type.replace('_', ' ').title()} Agent"
    -                    validator_agents[validation_type] = AgentConfiguration(
    -                        agent_name,
    -                        model=llm_model,
    -                        sub_task=False
    -                    )
    -                    
    -                    # For URL validation, skip if no URLs are present
    -                    if validation_type == "url_validation":
    -                        if not contains_urls([prompt] + context_strings):
    -                            # Set a default "no URLs found" validation point
    -                            setattr(validation_result, validation_type, ValidationPoint(
    -                                is_suspicious=False,
    -                                feedback="No URLs found in content to validate",
    -                                suspicious_points=[],
    -                                source_reliability=SourceReliability.UNKNOWN,
    -                                verification_method="regex_url_detection",
    -                                confidence_score=1.0
    -                            ))
    -                            continue
    -
    -                    # Create validation task
    -                    validator_task = Task(
    -                        prompt,
    -                        images=task.images,
    -                        response_format=ValidationPoint,
    -                        tools=task.tools,
    -                        context=context_strings,  # Pass the processed context strings
    -                        price_id_=task.price_id,
    -                        not_main_task=True
    -                    )
    -                    
    -                    # Add task to the list
    -                    validation_tasks.append(validator_task)
    -                    validation_types.append(validation_type)
    -
    -                # Execute all validation tasks in parallel if there are any
    -                if validation_tasks:
    -                    # Run each validation task with its specific agent
    -                    validation_coroutines = []
    -                    for i, validation_type in enumerate(validation_types):
    -                        validation_coroutines.append(
    -                            validator_agents[validation_type].do_async(validation_tasks[i])
    -                        )
    -                    
    -                    # Wait for all validation tasks to complete
    -                    await asyncio.gather(*validation_coroutines)
    -                    
    -                    # Process results
    -                    for i, validation_type in enumerate(validation_types):
    -                        setattr(validation_result, validation_type, validation_tasks[i].response)
    -
    -                validation_result.calculate_suspicion()
    -
    -                if validation_result.any_suspicion:
    -                    editor_agent = AgentConfiguration(
    -                        "Information Editor Agent",
    -                        model=llm_model,
    -                        sub_task=False
    -                    )
    -                    formatted_prompt = editor_task_prompt.format(
    -                        validation_feedback=validation_result.overall_feedback
    -                    )
    -                    formatted_prompt += f"OLD AI Response: {old_task_output}"
    -
    -                    the_context = [copy_task, copy_task.response_format, validation_result]
    -                    the_context += copy_task.context
    -                    editor_task = Task(
    -                        formatted_prompt,
    -                        images=task.images,
    -                        context=the_context,
    -                        response_format=task.response_format,
    -                        tools=task.tools,
    -                        price_id_=task.price_id,
    -                        not_main_task=True
    -                    )
    -                    await editor_agent.do_async(editor_task)
    -                    return editor_task.response
    -
    -                return result
    -
    -        return processed_result
    -
    -def find_urls_in_text(text: str) -> List[str]:
    -    """Find all URLs in the given text using regex pattern matching."""
    -    # This pattern matches URLs starting with http://, https://, ftp://, or www.
    -    url_pattern = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
    -    return re.findall(url_pattern, text)
    -
    -def contains_urls(texts: List[str]) -> bool:
    -    """Check if any of the provided texts contain URLs."""
    -    for text in texts:
    -        if not isinstance(text, str):
    -            continue
    -        if find_urls_in_text(text):
    -            return True
    -    return False
    \ No newline at end of file
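The two removed URL helpers above are self-contained; a minimal standalone sketch exercising the same regex-based detection (outside the Upsonic codebase, with illustrative inputs):

```python
import re
from typing import List

# Same pattern as the removed helpers: matches http:// and https:// URLs.
URL_PATTERN = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'

def find_urls_in_text(text: str) -> List[str]:
    """Find all URLs in the given text using regex pattern matching."""
    return re.findall(URL_PATTERN, text)

def contains_urls(texts: List[str]) -> bool:
    """Check if any of the provided texts contain URLs; non-strings are skipped."""
    return any(isinstance(t, str) and find_urls_in_text(t) for t in texts)

print(find_urls_in_text("docs at https://example.com/guide and notes"))
print(contains_urls(["plain text", 42, "see http://example.org"]))
```

Note the character class `[$-_@.&+]` is a range from `$` (0x24) to `_` (0x5F), so it also matches `/`, `:`, digits, and uppercase letters; the extra `@.&+` entries are redundant but harmless.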
    
  • src/upsonic/server/api.py+0 124 removed
    @@ -1,124 +0,0 @@
    -from fastapi import FastAPI, HTTPException
    -from functools import wraps
    -import asyncio
    -import httpx
    -
    -from fastapi import FastAPI, HTTPException, Request, Response
    -import asyncio
    -from functools import wraps
    -from ..exception import TimeoutException
    -import inspect
    -from starlette.responses import JSONResponse
    -import threading
    -import time
    -import traceback
    -import logging
    -import os
    -
    -# Configure logging
    -logging.basicConfig(level=logging.ERROR)
    -logger = logging.getLogger(__name__)
    -
    -app = FastAPI()
    -
    -# Remove the middleware and use exception handlers instead
    -@app.exception_handler(Exception)
    -async def exception_handler(request: Request, exc: Exception):
    -    tb = traceback.extract_tb(exc.__traceback__)
    -    file_path = tb[-1].filename
    -    if "Upsonic/src/" in file_path:
    -        file_path = file_path.split("Upsonic/src/")[1]
    -    line_number = tb[-1].lineno
    -    logging.error(f"Error in {file_path} at line {line_number}: {exc}", exc_info=True)
    -    return JSONResponse(
    -        status_code=500,
    -        content={"detail": f"Error in {file_path} at line {line_number}: {str(exc)}"}
    -    )
    -
    -def handle_server_errors(func):
    -    """
    -    Decorator to catch internal server errors, print the traceback,
    -    and return a standardized error response.
    -    """
    -    @wraps(func)
    -    async def async_wrapper(*args, **kwargs):
    -        try:
    -            if inspect.iscoroutinefunction(func):
    -                return await func(*args, **kwargs)
    -            else:
    -                return func(*args, **kwargs)
    -        except Exception as e:
    -            tb = traceback.extract_tb(e.__traceback__)
    -            file_path = tb[-1].filename
    -            if "Upsonic/src/" in file_path:
    -                file_path = file_path.split("Upsonic/src/")[1]
    -            line_number = tb[-1].lineno
    -            traceback.print_exc()
    -            return {"result": {"status_code": 500, "detail": f"Error processing Call request in {file_path} at line {line_number}: {str(e)}"}, "status_code": 500}
    -
    -    @wraps(func)
    -    def sync_wrapper(*args, **kwargs):
    -        try:
    -            return func(*args, **kwargs)
    -        except Exception as e:
    -            tb = traceback.extract_tb(e.__traceback__)
    -            file_path = tb[-1].filename
    -            if "Upsonic/src/" in file_path:
    -                file_path = file_path.split("Upsonic/src/")[1]
    -            line_number = tb[-1].lineno
    -            traceback.print_exc()
    -            return {"result": {"status_code": 500, "detail": f"Error processing Call request in {file_path} at line {line_number}: {str(e)}"}, "status_code": 500}
    -
    -    return async_wrapper if inspect.iscoroutinefunction(func) else sync_wrapper
    -
    -@app.get("/status")
    -async def get_status():
    -    return {"status": "Server is running"}
    -
    -
    -def timeout(seconds: float):
    -    def decorator(func):
    -        @wraps(func)
    -        async def async_wrapper(*args, **kwargs):
    -            try:
    -                # Create a task for the function
    -                task = asyncio.create_task(func(*args, **kwargs))
    -                # Wait for the task to complete with timeout
    -                result = await asyncio.wait_for(task, timeout=seconds)
    -                return result
    -            except asyncio.TimeoutError:
    -                raise HTTPException(
    -                    status_code=408,
    -                    detail=f"Function timed out after {seconds} seconds"
    -                )
    -
    -        @wraps(func)
    -        def sync_wrapper(*args, **kwargs):
    -            # For synchronous functions, we'll use a thread-based approach
    -            result = []
    -            error = []
    -            
    -            def target():
    -                try:
    -                    result.append(func(*args, **kwargs))
    -                except Exception as e:
    -                    error.append(e)
    -            
    -            thread = threading.Thread(target=target)
    -            thread.daemon = True
    -            thread.start()
    -            thread.join(timeout=seconds)  # Wait for the specified timeout
    -            
    -            if thread.is_alive():
    -                raise HTTPException(
    -                    status_code=408,
    -                    detail=f"Function timed out after {seconds} seconds"
    -                )
    -            
    -            if error:
    -                raise error[0]
    -            
    -            return result[0]
    -
    -        return async_wrapper if inspect.iscoroutinefunction(func) else sync_wrapper
    -    return decorator
    \ No newline at end of file
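The synchronous branch of the removed `timeout` decorator can be reproduced outside FastAPI; a minimal sketch that raises the stdlib `TimeoutError` in place of `HTTPException` (the `quick` function is illustrative):

```python
import threading
from functools import wraps

def timeout(seconds: float):
    """Run a synchronous function in a daemon thread and enforce a deadline."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            result, error = [], []

            def target():
                try:
                    result.append(func(*args, **kwargs))
                except Exception as e:
                    error.append(e)

            thread = threading.Thread(target=target, daemon=True)
            thread.start()
            thread.join(timeout=seconds)  # wait at most `seconds`
            if thread.is_alive():
                # The worker thread keeps running; we only stop waiting for it.
                raise TimeoutError(f"Function timed out after {seconds} seconds")
            if error:
                raise error[0]
            return result[0]
        return wrapper
    return decorator

@timeout(1.0)
def quick() -> str:
    return "ok"

print(quick())  # ok
```

As the comment notes, this approach abandons rather than cancels the worker: the daemon thread continues executing after the timeout, which is acceptable for request handling but leaks work for long-running functions.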
    
  • src/upsonic/server/__init__.py+0 120 removed
    @@ -1,120 +0,0 @@
    -from ..storage.configuration import Configuration
    -from .level_one.call import Call
    -from ..server_manager import ServerManager
    -
    -from .api import app
    -from .level_one.server.server import *
    -from .level_two.server.server import *
    -from .storage.server.server import *
    -from .tools.server import *
    -from .markdown.server.server import *
    -from .others.server.server import *
    -
    -import warnings
    -import threading
    -import time
    -import concurrent.futures
    -import traceback
    -from multiprocessing import freeze_support
    -from ..tools_server import run_tools_server, stop_tools_server, is_tools_server_running
    -
    -warnings.filterwarnings("ignore", category=UserWarning)
    -warnings.filterwarnings("ignore", category=ResourceWarning)
    -warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
    -
    -_server_manager = ServerManager(
    -    app_path="upsonic.server.api:app",
    -    host="localhost",
    -    port=7541,
    -    name="main"
    -)
    -
    -def run_main_server(redirect_output: bool = False):
    -    """Start the main server if it's not already running."""
    -    _server_manager.start(redirect_output=redirect_output)
    -
    -def run_main_server_internal(reload: bool = True):
    -    """Run the main server directly (for development)"""
    -    import uvicorn
    -    uvicorn.run("upsonic.server.api:app", host="0.0.0.0", port=7541, reload=reload)
    -
    -def stop_main_server():
    -    """Stop the main server if it's running."""
    -    _server_manager.stop()
    -
    -def is_main_server_running() -> bool:
    -    """Check if the main server is currently running."""
    -    return _server_manager.is_running()
    -
    -def _start_server(server_func, server_name, redirect_output=True):
    -    """Start a server"""
    -    try:
    -        # Always start the server fresh
    -        server_func(redirect_output=redirect_output)
    -        return True
    -    except Exception as e:
    -        print(f"\nError starting {server_name} server:")
    -        print("=" * 60)
    -        traceback.print_exc()
    -        print("=" * 60)
    -        return False
    -
    -def run_dev_server(redirect_output=True):
    -    """Run both main and tools servers for development with maximum parallelism"""
    -    
    -    try:
    -        # Use ThreadPoolExecutor to run both servers in parallel
    -        with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    -            # Submit both server start tasks
    -            main_future = executor.submit(
    -                _start_server,
    -                run_main_server,
    -                "main",
    -                redirect_output
    -            )
    -            
    -            tools_future = executor.submit(
    -                _start_server,
    -                run_tools_server,
    -                "tools", 
    -                redirect_output
    -            )
    -            
    -            # Wait for both to complete with a timeout
    -            try:
    -                main_result = main_future.result(timeout=150)
    -                tools_result = tools_future.result(timeout=150)
    -                
    -                if not main_result or not tools_result:
    -                    # Clean up if either server failed
    -                    print("\nOne or both servers failed to start. Stopping all servers...")
    -                    stop_dev_server()
    -                    raise RuntimeError("Failed to start servers - check above logs for details")
    -                
    -                # Add a small delay to ensure servers are fully initialized
    -                time.sleep(0.5)
    -
    -
    -                return
    -                
    -            except concurrent.futures.TimeoutError:
    -                print("\nTimeout occurred while starting servers")
    -                stop_dev_server()
    -                raise RuntimeError("Timeout waiting for servers to start")
    -    except Exception as e:
    -        print("\nUnexpected error in run_dev_server:")
    -        print("=" * 60)
    -        traceback.print_exc()
    -        print("=" * 60)
    -        raise
    -
    -def stop_dev_server():
    -    """Stop both main and tools servers"""
    -    stop_main_server()
    -    stop_tools_server()
    -
    -if __name__ == '__main__':
    -    freeze_support()
    -
    -__all__ = ["Configuration", "Call", "app", "run_main_server", "stop_main_server", 
    -           "is_main_server_running", "run_main_server_internal", "run_dev_server", "stop_dev_server"]
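The removed `run_dev_server` follows a common pattern: submit both startup routines to a `ThreadPoolExecutor`, then fail fast if either returns `False` or exceeds the deadline. A minimal standalone sketch with a stand-in `start_server` (the sleep merely simulates startup work):

```python
import concurrent.futures
import time

def start_server(name: str) -> bool:
    """Stand-in for a real server start routine."""
    time.sleep(0.1)  # simulate startup work
    return True

def run_both(timeout_s: float = 5.0) -> dict:
    """Start both servers in parallel; raise if either fails or times out."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        futures = {name: executor.submit(start_server, name) for name in ("main", "tools")}
        try:
            results = {name: fut.result(timeout=timeout_s) for name, fut in futures.items()}
        except concurrent.futures.TimeoutError:
            raise RuntimeError("Timeout waiting for servers to start")
    if not all(results.values()):
        raise RuntimeError("Failed to start servers - check logs for details")
    return results

print(run_both())  # {'main': True, 'tools': True}
```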
    
  • src/upsonic/server/level_one/call.py+0 60 removed
    @@ -1,60 +0,0 @@
    -from pydantic import BaseModel
    -from typing import Any, Optional, List
    -from pydantic_ai.messages import ImageUrl
    -
    -from ...storage.configuration import Configuration
    -
    -from ..level_utilized.utility import (
    -    agent_creator, 
    -    prepare_message_history, 
    -    process_error_traceback,
    -    format_response,
    -    handle_compression_retry
    -)
    -
    -import openai
    -import traceback
    -
    -
    -class CallManager:
    -    async def gpt_4o(
    -        self,
    -        prompt: str,
    -        images: Optional[List[str]] = None,
    -        response_format: BaseModel = str,
    -        tools: list[str] = [],
    -        context: Any = None,
    -        llm_model: str = "openai/gpt-4o",
    -        system_prompt: Optional[Any] = None 
    -    ):
    -        try:
    -            roulette_agent = agent_creator(response_format, tools, context, llm_model, system_prompt)
    -            if isinstance(roulette_agent, dict) and "status_code" in roulette_agent:
    -                return roulette_agent  # Return error from agent_creator
    -
    -            message_history = prepare_message_history(prompt, images, llm_model, tools)
    -
    -            try:
    -                print("I sent the request1")
    -                result = await roulette_agent.run(message_history)
    -                print("I got the response1")
    -                return format_response(result)
    -            except openai.BadRequestError as e:
    -                str_e = str(e)
    -                if "400" in str_e:
    -                    # Try to compress the message prompt
    -                    try:
    -                        result = await handle_compression_retry(
    -                            prompt, images, tools, llm_model, 
    -                            response_format, context, system_prompt
    -                        )
    -                        return format_response(result)
    -                    except Exception as e:
    -                        traceback.print_exc()
    -                        return process_error_traceback(e)
    -                else:
    -                    return process_error_traceback(e)
    -        except Exception as e:
    -            return process_error_traceback(e)
    -
    -Call = CallManager()
    
  • src/upsonic/server/level_one/server/server.py+0 100 removed
    @@ -1,100 +0,0 @@
    -from fastapi import HTTPException
    -from pydantic import BaseModel
    -from typing import List, Dict, Any, Optional, Union
    -import traceback
    -from ...api import app, timeout, handle_server_errors
    -from ..call import Call
    -import asyncio
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import base64
    -import os
    -
    -
    -prefix = "/level_one"
    -
    -
    -class GPT4ORequest(BaseModel):
    -    prompt: str
    -    images: Optional[List[str]] = None
    -    response_format: Optional[Any] = []
    -    tools: Optional[Any] = []
    -    context: Optional[Any] = None
    -    llm_model: Optional[Any] = "openai/gpt-4o"
    -    system_prompt: Optional[Any] = None
    -
    -
    -@app.post(f"{prefix}/gpt4o")
    -@handle_server_errors
    -async def call_gpt4o(request: GPT4ORequest):
    -    """
    -    Endpoint to call GPT-4o with optional tools and MCP servers.
    -
    -    Args:
    -        request: GPT4ORequest containing prompt and optional parameters
    -
    -    Returns:
    -        The response from the AI model
    -    """
    -    try:
    -        # Handle pickled response format
    -        if request.response_format != "str":
    -            try:
    -                # Decode and unpickle the response format
    -                pickled_data = base64.b64decode(request.response_format)
    -                response_format = cloudpickle.loads(pickled_data)
    -            except Exception as e:
    -                tb = traceback.extract_tb(e.__traceback__)
    -                file_path = tb[-1].filename
    -                if "Upsonic/src/" in file_path:
    -                    file_path = file_path.split("Upsonic/src/")[1]
    -                line_number = tb[-1].lineno
    -                traceback.print_exc()
    -                # Fallback to basic type mapping if unpickling fails
    -                type_mapping = {
    -                    "str": str,
    -                    "int": int,
    -                    "float": float,
    -                    "bool": bool,
    -                }
    -                response_format = type_mapping.get(request.response_format, str)
    -        else:
    -            response_format = str
    -
    -        if request.context is not None:
    -            try:
    -                pickled_context = base64.b64decode(request.context)
    -                context = cloudpickle.loads(pickled_context)
    -            except Exception as e:
    -                tb = traceback.extract_tb(e.__traceback__)
    -                file_path = tb[-1].filename
    -                if "Upsonic/src/" in file_path:
    -                    file_path = file_path.split("Upsonic/src/")[1]
    -                line_number = tb[-1].lineno
    -                traceback.print_exc()
    -                context = None
    -        else:
    -            context = None
    -
    -        result = await Call.gpt_4o(
    -            prompt=request.prompt,
    -            images=request.images,
    -            response_format=response_format,
    -            tools=request.tools,
    -            context=context,
    -            llm_model=request.llm_model,
    -            system_prompt=request.system_prompt
    -        )
    -
    -        if request.response_format != "str" and result["status_code"] == 200:
    -            result["result"] = cloudpickle.dumps(result["result"])
    -            result["result"] = base64.b64encode(result["result"]).decode('utf-8')
    -        return {"result": result, "status_code": 200}
    -    except Exception as e:
    -        tb = traceback.extract_tb(e.__traceback__)
    -        file_path = tb[-1].filename
    -        if "Upsonic/src/" in file_path:
    -            file_path = file_path.split("Upsonic/src/")[1]
    -        line_number = tb[-1].lineno
    -        traceback.print_exc()
    -        return {"result": {"status_code": 500, "detail": f"Error processing Call request in {file_path} at line {line_number}: {str(e)}"}, "status_code": 500}
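Both endpoint handlers ship rich Python objects over HTTP by cloudpickle-ing and base64-encoding them. The same round-trip, sketched with the stdlib `pickle` module (cloudpickle exposes the same `dumps`/`loads` interface), with the usual caveat attached:

```python
import base64
import pickle

def encode_payload(obj) -> str:
    """Serialize an object and wrap it in base64 for JSON transport."""
    return base64.b64encode(pickle.dumps(obj)).decode("utf-8")

def decode_payload(payload: str):
    """Reverse the round-trip.

    WARNING: unpickling untrusted data can execute arbitrary code;
    only decode payloads from a trusted client, as this server assumes.
    """
    return pickle.loads(base64.b64decode(payload))

original = {"status_code": 200, "result": [1, 2, 3]}
print(decode_payload(encode_payload(original)) == original)  # True
```

This trust assumption is why the handlers above fall back to a plain type mapping (`str`, `int`, `float`, `bool`) when unpickling the response format fails.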
    
  • src/upsonic/server/level_two/agent.py+0 134 removed
    @@ -1,134 +0,0 @@
    -import traceback
    -import anthropic
    -import openai
    -from pydantic import BaseModel
    -import os
    -from pydantic_ai.messages import ImageUrl
    -
    -from typing import Any, Optional, List
    -
    -from ...storage.configuration import Configuration
    -
    -from ..level_utilized.memory import save_temporary_memory, get_temporary_memory
    -
    -from ..level_utilized.utility import (
    -    agent_creator, 
    -    prepare_message_history,
    -    process_error_traceback,
    -    format_response,
    -    handle_compression_retry
    -)
    -
    -from ...client.tasks.tasks import Task
    -from ...client.tasks.task_response import ObjectResponse
    -
    -from ..level_one.call import Call
    -
    -
    -def extract_latest_tool_usage(all_messages):
    -    """Extract tool usage from the latest interaction only."""
    -    tool_usage = []
    -    current_tool = None
    -    
    -    # Find the start of the latest interaction
    -    latest_interaction_start = 0
    -    for i, msg in enumerate(all_messages):
    -        if msg.kind == 'request' and any(part.part_kind == 'user-prompt' for part in msg.parts):
    -            latest_interaction_start = i
    -            
    -    # Process only messages from the latest interaction
    -    for msg in all_messages[latest_interaction_start:]:
    -        if msg.kind == 'request':
    -            for part in msg.parts:
    -                if part.part_kind == 'tool-return':
    -                    if current_tool and current_tool['tool_name'] != 'final_result':
    -                        current_tool['tool_result'] = part.content
    -                        tool_usage.append(current_tool)
    -                    current_tool = None
    -                    
    -        elif msg.kind == 'response':
    -            for part in msg.parts:
    -                if part.part_kind == 'tool-call' and part.tool_name != 'final_result':
    -                    current_tool = {
    -                        'tool_name': part.tool_name,
    -                        'params': part.args,
    -                        'tool_result': None
    -                    }
    -    
    -    return tool_usage
    -
    -class AgentManager:
    -    async def agent(
    -        self,
    -        agent_id: str,
    -        prompt: str,
    -        images: Optional[List[str]] = None,
    -        response_format: BaseModel = str,
    -        tools: list[str] = [],
    -        context: Any = None,
    -        llm_model: str = "openai/gpt-4o",
    -        system_prompt: Optional[Any] = None,
    -        context_compress: bool = False,
    -        memory: bool = False
    -    ):
    -        try:
    -            roulette_agent = agent_creator(
    -                response_format=response_format, 
    -                tools=tools, 
    -                context=context, 
    -                llm_model=llm_model, 
    -                system_prompt=system_prompt,
    -                context_compress=context_compress
    -            )
    -            
    -            if isinstance(roulette_agent, dict) and "status_code" in roulette_agent:
    -                return roulette_agent  # Return error from agent_creator
    -
    -            agent_memory = []
    -            if memory:
    -                agent_memory = get_temporary_memory(agent_id)
    -
    -            message_history = prepare_message_history(prompt, images, llm_model, tools)
    -                
    -            total_request_tokens = 0
    -            total_response_tokens = 0
    -
    -            try:
    -                result = await roulette_agent.run(message_history, message_history=agent_memory)
    -            except (openai.BadRequestError, anthropic.BadRequestError) as e:
    -                str_e = str(e)
    -                if "400" in str_e and context_compress:
    -                    try:
    -                        result = await handle_compression_retry(
    -                            prompt, images, tools, llm_model,
    -                            response_format, context, system_prompt, agent_memory
    -                        )
    -                    except Exception as e:
    -                        return process_error_traceback(e)
    -                else:
    -                    return process_error_traceback(e)
    -
    -            total_request_tokens += result.usage().request_tokens
    -            total_response_tokens += result.usage().response_tokens
    -
    -            if memory:
    -                save_temporary_memory(result.all_messages(), agent_id)
    -
    -            # Extract tool usage from the latest interaction only
    -            tool_usage = extract_latest_tool_usage(result.all_messages())
    -
    -            return {
    -                "status_code": 200, 
    -                "result": result.data, 
    -                "usage": {
    -                    "input_tokens": total_request_tokens, 
    -                    "output_tokens": total_response_tokens
    -                },
    -                "tool_usage": tool_usage
    -            }
    -
    -        except Exception as e:
    -            return process_error_traceback(e)
    -
    -
    -Agent = AgentManager()
    \ No newline at end of file
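The removed `extract_latest_tool_usage` pairs each `tool-call` response part with the `tool-return` request part that follows it, scanning only from the last user prompt onward. Its behavior can be exercised with stand-in message objects; `SimpleNamespace` here replaces pydantic_ai's message classes, and the message contents are illustrative:

```python
from types import SimpleNamespace as NS

def extract_latest_tool_usage(all_messages):
    """Extract tool usage from the latest interaction only."""
    tool_usage = []
    current_tool = None

    # Find the start of the latest interaction (the last user prompt).
    latest_interaction_start = 0
    for i, msg in enumerate(all_messages):
        if msg.kind == 'request' and any(p.part_kind == 'user-prompt' for p in msg.parts):
            latest_interaction_start = i

    # Pair each tool-call with the tool-return that follows it.
    for msg in all_messages[latest_interaction_start:]:
        if msg.kind == 'request':
            for part in msg.parts:
                if part.part_kind == 'tool-return':
                    if current_tool and current_tool['tool_name'] != 'final_result':
                        current_tool['tool_result'] = part.content
                        tool_usage.append(current_tool)
                    current_tool = None
        elif msg.kind == 'response':
            for part in msg.parts:
                if part.part_kind == 'tool-call' and part.tool_name != 'final_result':
                    current_tool = {'tool_name': part.tool_name,
                                    'params': part.args,
                                    'tool_result': None}
    return tool_usage

messages = [
    NS(kind='request', parts=[NS(part_kind='user-prompt')]),
    NS(kind='response', parts=[NS(part_kind='tool-call', tool_name='search', args={'q': 'upsonic'})]),
    NS(kind='request', parts=[NS(part_kind='tool-return', content='2 results')]),
]
print(extract_latest_tool_usage(messages))
# [{'tool_name': 'search', 'params': {'q': 'upsonic'}, 'tool_result': '2 results'}]
```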
    
  • src/upsonic/server/level_two/server/server.py+0 94 removed
    @@ -1,94 +0,0 @@
    -from fastapi import HTTPException
    -from pydantic import BaseModel
    -from typing import List, Dict, Any, Optional, Union
    -import traceback
    -
    -import pydantic_ai
    -from ...api import app, timeout, handle_server_errors
    -from ..agent import Agent
    -import asyncio
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import base64
    -
    -
    -prefix = "/level_two"
    -
    -
    -class AgentRequest(BaseModel):
    -    agent_id: str
    -    prompt: str
    -    images: Optional[List[str]] = None
    -    response_format: Optional[Any] = []
    -    tools: Optional[Any] = []
    -    context: Optional[Any] = None
    -    llm_model: Optional[Any] = "openai/gpt-4o"
    -    system_prompt: Optional[Any] = None
    -    context_compress: Optional[Any] = False
    -    memory: Optional[Any] = False
    -
    -
    -@app.post(f"{prefix}/agent")
    -@handle_server_errors
    -async def call_agent(request: AgentRequest):
    -    """
    -    Endpoint to run an agent with optional tools and MCP servers.
    -
    -    Args:
    -        request: AgentRequest containing prompt and optional parameters
    -
    -    Returns:
    -        The response from the AI model
    -    """
    -    try:
    -        # Handle pickled response format
    -        if request.response_format != "str":
    -            try:
    -                # Decode and unpickle the response format
    -                pickled_data = base64.b64decode(request.response_format)
    -                response_format = cloudpickle.loads(pickled_data)
    -            except Exception as e:
    -                # Fallback to basic type mapping if unpickling fails
    -                type_mapping = {
    -                    "str": str,
    -                    "int": int,
    -                    "float": float,
    -                    "bool": bool,
    -                }
    -                response_format = type_mapping.get(request.response_format, str)
    -        else:
    -            response_format = str
    -
    -        if request.context is not None:
    -            try:
    -                pickled_context = base64.b64decode(request.context)
    -                context = cloudpickle.loads(pickled_context)
    -            except Exception as e:
    -                context = None
    -        else:
    -            context = None
    -
    -        result = await Agent.agent(
    -            agent_id=request.agent_id,
    -            prompt=request.prompt,
    -            images=request.images,
    -            response_format=response_format,
    -            tools=request.tools,
    -            context=context,
    -            llm_model=request.llm_model,
    -            system_prompt=request.system_prompt,
    -            context_compress=request.context_compress,
    -            memory=request.memory
    -        )
    -
    -        if request.response_format != "str" and result["status_code"] == 200:
    -            result["result"] = cloudpickle.dumps(result["result"])
    -            result["result"] = base64.b64encode(result["result"]).decode('utf-8')
    -        return {"result": result, "status_code": 200}
    -
    -    except pydantic_ai.exceptions.UnexpectedModelBehavior as e:
    -        return {"result": {"status_code": 500, "detail": "Simplify your response format or improve your task description. The current response format is too hard for the model to satisfy (don't use 'dict'-like fields in your response format; define everything explicitly). Try breaking it into smaller parts."}, "status_code": 500}
    -
    -    except Exception as e:
    -        traceback.print_exc()
    -        return {"result": {"status_code": 500, "detail": f"Error processing Agent request: {str(e)}"}, "status_code": 500}
    
  • src/upsonic/server/level_utilized/bu/browseruse.py+0 231 removed
    @@ -1,231 +0,0 @@
    -from dotenv import load_dotenv
    -load_dotenv()
    -
    -from ....storage.configuration import Configuration
    -
    -import asyncio
    -import atexit
    -
    -
    -
    -class BrowserManager:
    -    _instance = None
    -    _browser = None
    -    _loop = None
    -
    -    @classmethod
    -    def get_instance(cls):
    -        if cls._instance is None:
    -            cls._instance = cls()
    -            # Register the cleanup function
    -            atexit.register(cls._cleanup)
    -        return cls._instance
    -
    -    @classmethod
    -    async def initialize(cls):
    -        instance = cls.get_instance()
    -        if instance._browser is None:
    -            from browser_use import Browser
    -            browser = Browser()
    -            instance._browser = browser
    -            instance._loop = asyncio.get_event_loop()
    -            return browser
    -        return instance._browser
    -
    -    @classmethod
    -    async def get_context(cls):
    -        """Get a new browser context for isolation"""
    -        instance = cls.get_instance()
    -        if instance._browser:
    -            context = await instance._browser.new_context()
    -            return context
    -        return None
    -
    -    @classmethod
    -    def _cleanup(cls):
    -        """Cleanup function that will be called when the Python process exits"""
    -        instance = cls.get_instance()
    -        if instance._browser and instance._loop:
    -            # Create a new event loop if the main one is closed
    -            try:
    -                loop = instance._loop if instance._loop.is_running() else asyncio.new_event_loop()
    -                asyncio.set_event_loop(loop)
    -                loop.run_until_complete(instance._browser.close())
    -            except:
    -                pass  # Suppress any errors during shutdown
    -
    -    @classmethod
    -    async def close(cls):
    -        """Manual close method - only use if you explicitly need to close the browser"""
    -        instance = cls.get_instance()
    -        if instance._browser:
    -            await instance._browser.close()
    -            instance._browser = None
    -
    -    @classmethod
    -    def get_browser(cls):
    -        instance = cls.get_instance()
    -        return instance._browser
    -
    -
    -class LLMManager:
    -    _instance = None
    -    _llm_model = None
    -
    -    @classmethod
    -    def get_instance(cls):
    -        if cls._instance is None:
    -            cls._instance = cls()
    -        return cls._instance
    -
    -    @classmethod
    -    def set_model(cls, model):
    -        instance = cls.get_instance()
    -        instance._llm_model = model
    -        print("SETTING THE LLM MODEL TO:", model)
    -
    -    @classmethod
    -    def get_model(cls):
    -        instance = cls.get_instance()
    -        print("GETTING THE LLM MODEL:", instance._llm_model)
    -        return instance._llm_model
    -
    -
    -def get_llm():
    -    llm_model = LLMManager.get_model()
    -    print("THE LLM MODEL IS", llm_model)
    -    
    -    if not llm_model:
    -        raise ValueError("LLM model not set before calling get_llm()")
    -    
    -    # Map our model names to standard model names
    -    openai_model_mapping = {
    -        "openai/gpt-4o": "gpt-4o",
    -        "gpt-4o": "gpt-4o",
    -        "openai/o3-mini": "o3-mini",
    -        "openai/gpt-4o-mini": "gpt-4o",
    -        "azure/gpt-4o": "gpt-4o",
    -        "azure/gpt-4o-mini": "gpt-4o-mini",
    -        "gpt-4o-azure": "gpt-4o"
    -    }
    -
    -    claude_model_mapping = {
    -        "claude/claude-3-5-sonnet": "claude-3-5-sonnet-latest",
    -        "claude-3-5-sonnet": "claude-3-5-sonnet-latest",
    -        "bedrock/claude-3-5-sonnet": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
    -        "claude-3-5-sonnet-aws": "us.anthropic.claude-3-5-sonnet-20241022-v2:0"
    -    }
    -
    -    deepseek_model_mapping = {
    -        "deepseek/deepseek-chat": "deepseek-chat"
    -    }
    -    
    -    # Handle Azure OpenAI
    -    if llm_model in ["azure/gpt-4o", "gpt-4o-azure", "azure/gpt-4o-mini"]:
    -        azure_endpoint = Configuration.get("AZURE_OPENAI_ENDPOINT")
    -        azure_api_key = Configuration.get("AZURE_OPENAI_API_KEY")
    -        azure_api_version = Configuration.get("AZURE_OPENAI_API_VERSION", "2024-10-21")
    -        
    -
    -            
    -        from langchain_openai import AzureChatOpenAI
    -        llm = AzureChatOpenAI(
    -            model="gpt-4o",
    -            api_version=azure_api_version,
    -            azure_endpoint=azure_endpoint,
    -            api_key=azure_api_key
    -        )
    -    
    -    # Handle regular OpenAI
    -    elif llm_model in openai_model_mapping:
    -        openai_api_key = Configuration.get("OPENAI_API_KEY")
    -        if not openai_api_key:
    -            raise ValueError("OpenAI API key not found in configuration")
    -        from langchain_openai import ChatOpenAI
    -        llm = ChatOpenAI(
    -            model_name=openai_model_mapping[llm_model],
    -            openai_api_key=openai_api_key,
    -        )
    -
    -    # Handle Claude (Anthropic)
    -    elif llm_model in claude_model_mapping:
    -        if llm_model in ["bedrock/claude-3-5-sonnet", "claude-3-5-sonnet-aws"]:
    -            # AWS Bedrock configuration
    -            aws_access_key_id = Configuration.get("AWS_ACCESS_KEY_ID")
    -            aws_secret_access_key = Configuration.get("AWS_SECRET_ACCESS_KEY")
    -            aws_region = Configuration.get("AWS_REGION")
    -            
    -            if not all([aws_access_key_id, aws_secret_access_key, aws_region]):
    -                raise ValueError("AWS credentials not found in configuration")
    -            from langchain_community.chat_models import BedrockChat
    -            llm = BedrockChat(
    -                model_id=claude_model_mapping[llm_model],
    -                credentials_profile_name=None,
    -                region_name=aws_region,
    -                aws_access_key_id=aws_access_key_id,
    -                aws_secret_access_key=aws_secret_access_key
    -            )
    -        else:
    -            # Regular Anthropic configuration
    -            anthropic_api_key = Configuration.get("ANTHROPIC_API_KEY")
    -            if not anthropic_api_key:
    -                raise ValueError("Anthropic API key not found in configuration")
    -            from langchain_anthropic import ChatAnthropic
    -            llm = ChatAnthropic(
    -                model=claude_model_mapping[llm_model],
    -                anthropic_api_key=anthropic_api_key,
    -                temperature=0.0,
    -                timeout=100
    -            )
    -
    -    # Handle DeepSeek models
    -    elif llm_model in deepseek_model_mapping:
    -        deepseek_api_key = Configuration.get("DEEPSEEK_API_KEY")
    -        if not deepseek_api_key:
    -            raise ValueError("DeepSeek API key not found in configuration")
    -            
    -        from langchain_openai import ChatOpenAI
    -        llm = ChatOpenAI(
    -            model_name=deepseek_model_mapping[llm_model],
    -            api_key=deepseek_api_key,
    -            base_url="https://api.deepseek.com/v1"
    -        )
    -    
    -    else:
    -        raise ValueError(f"Unsupported model for browser use: {llm_model}")
    -    
    -    return llm
    -
    -
    -async def BrowserUse__browser_agent(task: str, expected_output: str):
    -    """An AI agent that can browse the web, extract information, and perform actions."""
    -    from browser_use import Agent
    -    
    -    # Get or create the browser instance
    -    browser = await BrowserManager.initialize()
    -    
    -    # Create a new context for this agent run
    -    context = await BrowserManager.get_context()
    -    
    -    # Create the agent with the browser context
    -    agent = Agent(
    -        task=task+"\n\nExpected Output: "+expected_output,
    -        llm=get_llm(),
    -        browser=browser,
    -        browser_context=context,  # Use persistent context
    -        generate_gif=False
    -    )
    -    
    -    try:
    -        result = await agent.run()
    -        return result.final_result()
    -    finally:
    -        # Clean up the context after the agent is done
    -        if context:
    -            await context.close()
    -
    -
    -# List of all browser use tools
    -BrowserUse_tools = [
    -    BrowserUse__browser_agent
    -]
    
  • src/upsonic/server/level_utilized/bu/__init__.py +0 −12 removed
    @@ -1,12 +0,0 @@
    -
    -
    -
    -from .browseruse import BrowserUse_tools
    -
    -__ALL__ = [
    -
    -
    -    BrowserUse_tools,
    -
    -
    -]
    \ No newline at end of file
    
  • src/upsonic/server/level_utilized/cu/base.py +0 −19 removed
    @@ -1,19 +0,0 @@
    -from abc import ABCMeta, abstractmethod
    -from typing import Any
    -
    -from anthropic.types.beta import BetaToolUnionParam
    -
    -
    -class BaseAnthropicTool(metaclass=ABCMeta):
    -    """Abstract base class for Anthropic-defined tools."""
    -
    -    @abstractmethod
    -    def __call__(self, **kwargs) -> Any:
    -        """Executes the tool with the given arguments."""
    -        ...
    -
    -    @abstractmethod
    -    def to_params(
    -        self,
    -    ) -> BetaToolUnionParam:
    -        raise NotImplementedError
    \ No newline at end of file
    
  • src/upsonic/server/level_utilized/cu/bash.py +0 −144 removed
    @@ -1,144 +0,0 @@
    -import asyncio
    -import os
    -from typing import ClassVar, Literal
    -
    -from anthropic.types.beta import BetaToolBash20241022Param
    -
    -from .base import BaseAnthropicTool, CLIResult, ToolError, ToolResult
    -
    -
    -class _BashSession:
    -    """A session of a bash shell."""
    -
    -    _started: bool
    -    _process: asyncio.subprocess.Process
    -
    -    command: str = "/bin/bash"
    -    _output_delay: float = 0.2  # seconds
    -    _timeout: float = 120.0  # seconds
    -    _sentinel: str = "<<exit>>"
    -
    -    def __init__(self):
    -        self._started = False
    -        self._timed_out = False
    -
    -    async def start(self):
    -        if self._started:
    -            return
    -
    -        self._process = await asyncio.create_subprocess_shell(
    -            self.command,
    -            preexec_fn=os.setsid,
    -            shell=True,
    -            bufsize=0,
    -            stdin=asyncio.subprocess.PIPE,
    -            stdout=asyncio.subprocess.PIPE,
    -            stderr=asyncio.subprocess.PIPE,
    -        )
    -
    -        self._started = True
    -
    -    def stop(self):
    -        """Terminate the bash shell."""
    -        if not self._started:
    -            raise ToolError("Session has not started.")
    -        if self._process.returncode is not None:
    -            return
    -        self._process.terminate()
    -
    -    async def run(self, command: str):
    -        """Execute a command in the bash shell."""
    -        if not self._started:
    -            raise ToolError("Session has not started.")
    -        if self._process.returncode is not None:
    -            return ToolResult(
    -                system="tool must be restarted",
    -                error=f"bash has exited with returncode {self._process.returncode}",
    -            )
    -        if self._timed_out:
    -            raise ToolError(
    -                f"timed out: bash has not returned in {self._timeout} seconds and must be restarted",
    -            )
    -
    -        # we know these are not None because we created the process with PIPEs
    -        assert self._process.stdin
    -        assert self._process.stdout
    -        assert self._process.stderr
    -
    -        # send command to the process
    -        self._process.stdin.write(
    -            command.encode() + f"; echo '{self._sentinel}'\n".encode()
    -        )
    -        await self._process.stdin.drain()
    -
    -        # read output from the process, until the sentinel is found
    -        try:
    -            async with asyncio.timeout(self._timeout):
    -                while True:
    -                    await asyncio.sleep(self._output_delay)
    -                    # if we read directly from stdout/stderr, it will wait forever for
    -                    # EOF. use the StreamReader buffer directly instead.
    -                    output = self._process.stdout._buffer.decode()  # pyright: ignore[reportAttributeAccessIssue]
    -                    if self._sentinel in output:
    -                        # strip the sentinel and break
    -                        output = output[: output.index(self._sentinel)]
    -                        break
    -        except asyncio.TimeoutError:
    -            self._timed_out = True
    -            raise ToolError(
    -                f"timed out: bash has not returned in {self._timeout} seconds and must be restarted",
    -            ) from None
    -
    -        if output.endswith("\n"):
    -            output = output[:-1]
    -
    -        error = self._process.stderr._buffer.decode()  # pyright: ignore[reportAttributeAccessIssue]
    -        if error.endswith("\n"):
    -            error = error[:-1]
    -
    -        # clear the buffers so that the next output can be read correctly
    -        self._process.stdout._buffer.clear()  # pyright: ignore[reportAttributeAccessIssue]
    -        self._process.stderr._buffer.clear()  # pyright: ignore[reportAttributeAccessIssue]
    -
    -        return CLIResult(output=output, error=error)
    -
    -
    -class BashTool(BaseAnthropicTool):
    -    """
    -    A tool that allows the agent to run bash commands.
    -    The tool parameters are defined by Anthropic and are not editable.
    -    """
    -
    -    _session: _BashSession | None
    -    name: ClassVar[Literal["bash"]] = "bash"
    -    api_type: ClassVar[Literal["bash_20241022"]] = "bash_20241022"
    -
    -    def __init__(self):
    -        self._session = None
    -        super().__init__()
    -
    -    async def __call__(
    -        self, command: str | None = None, restart: bool = False, **kwargs
    -    ):
    -        if restart:
    -            if self._session:
    -                self._session.stop()
    -            self._session = _BashSession()
    -            await self._session.start()
    -
    -            return ToolResult(system="tool has been restarted.")
    -
    -        if self._session is None:
    -            self._session = _BashSession()
    -            await self._session.start()
    -
    -        if command is not None:
    -            return await self._session.run(command)
    -
    -        raise ToolError("no command provided.")
    -
    -    def to_params(self) -> BetaToolBash20241022Param:
    -        return {
    -            "type": self.api_type,
    -            "name": self.name,
    -        }
    \ No newline at end of file
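The removed `_BashSession.run()` keeps one long-lived shell open and appends an `echo '<<exit>>'` sentinel to each command, then reads stdout until the sentinel appears, so it never blocks waiting for EOF from a shell that stays alive. A simplified, self-contained sketch of that sentinel technique (using `/bin/sh`, a one-shot function, and `asyncio.wait_for` in place of the session object and its buffer access):

```python
import asyncio

SENTINEL = "<<exit>>"

async def run_in_shell(command: str, timeout: float = 10.0) -> str:
    # Spawn a shell whose stdin/stdout we own.
    proc = await asyncio.create_subprocess_shell(
        "/bin/sh",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )
    # Send the command followed by the sentinel marker.
    proc.stdin.write(f"{command}; echo '{SENTINEL}'\n".encode())
    await proc.stdin.drain()

    # Read until the sentinel shows up (or the shell dies), with a timeout.
    output = ""
    while SENTINEL not in output:
        chunk = await asyncio.wait_for(proc.stdout.read(1024), timeout)
        if not chunk:  # shell exited before printing the sentinel
            break
        output += chunk.decode()

    proc.terminate()
    await proc.wait()
    if SENTINEL in output:
        output = output[: output.index(SENTINEL)]
    return output.rstrip("\n")

print(asyncio.run(run_in_shell("echo hello")))  # -> hello
```

The original goes further: it reuses a single session across calls, reads the `StreamReader` buffer directly, and marks the session as permanently timed out after one timeout, forcing a restart.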
    
  • src/upsonic/server/level_utilized/cu/collection.py +0 −34 removed
    @@ -1,34 +0,0 @@
    -"""Collection classes for managing multiple tools."""
    -
    -from typing import Any
    -
    -from anthropic.types.beta import BetaToolUnionParam
    -
    -from .base import (
    -    BaseAnthropicTool,
    -    ToolError,
    -    ToolFailure,
    -    ToolResult,
    -)
    -
    -
    -class ToolCollection:
    -    """A collection of anthropic-defined tools."""
    -
    -    def __init__(self, *tools: BaseAnthropicTool):
    -        self.tools = tools
    -        self.tool_map = {tool.to_params()["name"]: tool for tool in tools}
    -
    -    def to_params(
    -        self,
    -    ) -> list[BetaToolUnionParam]:
    -        return [tool.to_params() for tool in self.tools]
    -
    -    async def run(self, *, name: str, tool_input: dict[str, Any]) -> ToolResult:
    -        tool = self.tool_map.get(name)
    -        if not tool:
    -            return ToolFailure(error=f"Tool {name} is invalid")
    -        try:
    -            return await tool(**tool_input)
    -        except ToolError as e:
    -            return ToolFailure(error=e.message)
    \ No newline at end of file
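The removed `ToolCollection` indexes its tools by name once at construction, then `run()` resolves the requested tool and awaits it with keyword arguments, turning unknown names into an error result instead of an exception. A minimal runnable sketch of that dispatch pattern (the `EchoTool` and the plain-string error result are illustrative stand-ins for the real tool and `ToolFailure` types):

```python
import asyncio

class EchoTool:
    name = "echo"

    async def __call__(self, **kwargs):
        return kwargs.get("text", "")

class Collection:
    """Name-keyed dispatch, mirroring the removed ToolCollection."""

    def __init__(self, *tools):
        # Build the lookup table once, at construction time.
        self.tool_map = {tool.name: tool for tool in tools}

    async def run(self, *, name, tool_input):
        tool = self.tool_map.get(name)
        if not tool:
            # Unknown tools become a result, not an exception.
            return f"Tool {name} is invalid"
        return await tool(**tool_input)

coll = Collection(EchoTool())
print(asyncio.run(coll.run(name="echo", tool_input={"text": "hi"})))  # -> hi
```

The original additionally catches `ToolError` from the awaited tool and wraps it in a `ToolFailure`, so a single misbehaving tool cannot crash the agent loop.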
    
  • src/upsonic/server/level_utilized/cu/computer.py +0 −465 removed
    @@ -1,465 +0,0 @@
    -import asyncio
    -import base64
    -import math
    -import os
    -import platform
    -import shlex
    -import shutil
    -import tempfile
    -import time
    -from enum import StrEnum
    -from pathlib import Path
    -from typing import Literal, TypedDict
    -from uuid import uuid4
    -
    -# Add import for PyAutoGUI
    -import pyautogui
    -from anthropic.types.beta import BetaToolComputerUse20241022Param
    -
    -from .base import BaseAnthropicTool
    -from .run import run
    -
    -OUTPUT_DIR = "/tmp/outputs"
    -
    -TYPING_DELAY_MS = 12
    -TYPING_GROUP_SIZE = 50
    -
    -Action = Literal[
    -    "key",
    -    "type",
    -    "mouse_move",
    -    "left_click",
    -    "left_click_drag",
    -    "right_click",
    -    "middle_click",
    -    "double_click",
    -    "screenshot",
    -    "cursor_position",
    -]
    -
    -
    -class Resolution(TypedDict):
    -    width: int
    -    height: int
    -
    -
    -class ScalingMode(StrEnum):
    -    AUTO = "auto"  # Automatically determine best scaling
    -    FIXED = "fixed"  # Use fixed target resolutions
    -    RELATIVE = "relative"  # Scale by percentage
    -    NONE = "none"  # No scaling
    -
    -
    -# Base resolutions for different device categories
    -DEVICE_CATEGORIES: dict[str, Resolution] = {
    -    "HD": Resolution(width=1280, height=720),      # 720p
    -}
    -
    -class ScalingConfig(TypedDict):
    -    mode: ScalingMode
    -    target_resolution: Resolution | None  # Used for FIXED mode
    -    scale_factor: float | None  # Used for RELATIVE mode
    -    min_scale: float  # Minimum scaling factor
    -    max_scale: float  # Maximum scaling factor
    -    preserve_aspect_ratio: bool
    -
    -
    -DEFAULT_SCALING_CONFIG = ScalingConfig(
    -    mode=ScalingMode.RELATIVE,
    -    target_resolution=None,
    -    scale_factor=1.0,  # No scaling by default for HD resolution
    -    min_scale=0.25,
    -    max_scale=1.0,
    -    preserve_aspect_ratio=True
    -)
    -
    -class ScalingSource(StrEnum):
    -    COMPUTER = "computer"
    -    API = "api"
    -
    -
    -class ComputerToolOptions(TypedDict):
    -    display_height_px: int
    -    display_width_px: int
    -    display_number: int | None
    -
    -
    -def chunks(s: str, chunk_size: int) -> list[str]:
    -    return [s[i : i + chunk_size] for i in range(0, len(s), chunk_size)]
    -
    -
    -def smooth_move_to(x, y, duration=1.2):
    -    start_x, start_y = pyautogui.position()
    -    dx = x - start_x
    -    dy = y - start_y
    -    distance = math.hypot(dx, dy)  # Calculate the distance in pixels
    -
    -    start_time = time.time()
    -
    -    while True:
    -        elapsed_time = time.time() - start_time
    -        if elapsed_time > duration:
    -            break
    -
    -        t = elapsed_time / duration
    -        eased_t = (1 - math.cos(t * math.pi)) / 2  # easeInOutSine function
    -
    -        target_x = start_x + dx * eased_t
    -        target_y = start_y + dy * eased_t
    -        pyautogui.moveTo(target_x, target_y)
    -
    -    # Ensure the mouse ends up exactly at the target (x, y)
    -    pyautogui.moveTo(x, y)
    -
    -
    -class ComputerTool(BaseAnthropicTool):
    -    """
    -    A tool that allows the agent to interact with the primary monitor's screen, keyboard, and mouse.
    -    The tool parameters are defined by Anthropic and are not editable.
    -    """
    -
    -    name: Literal["computer"] = "computer"
    -    api_type: Literal["computer_20241022"] = "computer_20241022"
    -    width: int
    -    height: int
    -    display_num: None
    -    scaling_config: ScalingConfig
    -
    -    _screenshot_delay = 2.0
    -    _scaling_enabled = True
    -
    -    @property
    -    def options(self) -> ComputerToolOptions:
    -        width, height = self.scale_coordinates(
    -            ScalingSource.COMPUTER, self.width, self.height
    -        )
    -        return {
    -            "display_width_px": width,
    -            "display_height_px": height,
    -            "display_number": self.display_num,
    -        }
    -
    -    def to_params(self) -> BetaToolComputerUse20241022Param:
    -        return {"name": self.name, "type": self.api_type, **self.options}
    -
    -    def __init__(self):
    -        super().__init__()
    -        self.width, self.height = pyautogui.size()
    -        self.display_num = None
    -        self.scaling_config = self._determine_optimal_scaling()
    -
    -    def _determine_optimal_scaling(self) -> ScalingConfig:
    -        """Determine the optimal scaling configuration for HD displays (1280x720)."""
    -        config = DEFAULT_SCALING_CONFIG.copy()
    -        
    -        # Get current resolution
    -        print(f"Current resolution: {self.width}x{self.height}")
    -        
    -        # For 720p displays, use simple 1:1 scaling (no scaling)
    -        if abs(self.width - 1280) <= 100 and abs(self.height - 720) <= 100:
    -            config["mode"] = ScalingMode.RELATIVE
    -            config["scale_factor"] = 1.0
    -            print("Using 1:1 scaling for HD (720p) display")
    -        else:
    -            # For any other resolution, scale to match HD
    -            config["mode"] = ScalingMode.FIXED
    -            config["target_resolution"] = DEVICE_CATEGORIES["HD"]
    -            print(f"Scaling to target HD resolution: 1280x720")
    -        
    -        # Print final configuration
    -        print(f"Scaling config: mode={config['mode']}, scale_factor={config['scale_factor']}")
    -        return config
    -
    -    def scale_coordinates(self, source: ScalingSource, x: int, y: int) -> tuple[int, int]:
    -        """Scale coordinates based on the current scaling configuration."""
    -        if not self._scaling_enabled:
    -            return x, y
    -
    -        if self.scaling_config["mode"] == ScalingMode.NONE:
    -            return x, y
    -
    -        # Get current scaling factors
    -        x_factor = 1.0
    -        y_factor = 1.0
    -
    -        if self.scaling_config["mode"] == ScalingMode.FIXED and self.scaling_config["target_resolution"]:
    -            # Fixed mode - scale to target resolution
    -            x_factor = self.scaling_config["target_resolution"]["width"] / self.width
    -            y_factor = self.scaling_config["target_resolution"]["height"] / self.height
    -            
    -            if self.scaling_config["preserve_aspect_ratio"]:
    -                # Use the same factor for both dimensions to preserve aspect ratio
    -                x_factor = y_factor = min(x_factor, y_factor)
    -        
    -        elif self.scaling_config["mode"] == ScalingMode.RELATIVE and self.scaling_config["scale_factor"] is not None:
    -            # Relative mode - use scale factor directly
    -            x_factor = y_factor = self.scaling_config["scale_factor"]
    -        
    -        # Apply scaling based on source
    -        if source == ScalingSource.API:
    -            # Scale up (from scaled coordinates to actual screen coordinates)
    -            return round(x / x_factor), round(y / y_factor)
    -        else:
    -            # Scale down (from actual screen coordinates to scaled coordinates)
    -            return round(x * x_factor), round(y * y_factor)
    -
    -    def update_scaling_config(self, new_config: dict) -> None:
    -        """Update the scaling configuration with new settings."""
    -        self.scaling_config.update(new_config)
    -
    -    async def __call__(
    -        self,
    -        *,
    -        action: Action,
    -        text: str | None = None,
    -        coordinate: tuple[int, int] | None = None,
    -        **kwargs,
    -    ):
    -        print("action", action)
    -        print("text", text)
    -        print("coordinate", coordinate)
    -        if action in ("mouse_move", "left_click_drag"):
    -            if coordinate is None:
    -                return {"text": f"coordinate is required for {action}"}
    -            x, y = self.scale_coordinates(
    -                ScalingSource.API, coordinate[0], coordinate[1]
    -            )
    -
    -            if action == "mouse_move":
    -                smooth_move_to(x, y)
    -                return {"text": f"Mouse moved to X={x}, Y={y}"}
    -            elif action == "left_click_drag":
    -                smooth_move_to(x, y)
    -                pyautogui.dragTo(x, y, button="left")
    -                return {"text": f"Mouse dragged to X={x}, Y={y}"}
    -
    -        elif action in ("key", "type"):
    -            if text is None:
    -                return {"text": f"text is required for {action}"}
    -
    -            if action == "key":
    -                if platform.system() == "Darwin":  # Check if we're on macOS
    -                    text = text.replace("super+", "command+")
    -
    -                # Normalize key names
    -                def normalize_key(key):
    -                    key = key.lower().replace("_", "")
    -                    key_map = {
    -                        "pagedown": "pgdn",
    -                        "pageup": "pgup",
    -                        "enter": "return",
    -                        "return": "enter",
    -                    }
    -                    return key_map.get(key, key)
    -
    -                keys = [normalize_key(k) for k in text.split("+")]
    -
    -                if len(keys) > 1:
    -                    if "darwin" in platform.system().lower():
    -                        # Use AppleScript for hotkey on macOS
    -                        keystroke, modifier = (keys[-1], "+".join(keys[:-1]))
    -                        modifier = modifier.lower() + " down"
    -                        if keystroke.lower() == "space":
    -                            keystroke = " "
    -                        elif keystroke.lower() == "enter":
    -                            keystroke = "\n"
    -                        script = f"""
    -                        tell application "System Events"
    -                            keystroke "{keystroke}" using {modifier}
    -                        end tell
    -                        """
    -                        os.system("osascript -e '{}'".format(script))
    -                    else:
    -                        pyautogui.hotkey(*keys)
    -                else:
    -                    pyautogui.press(keys[0])
    -                return {"text": f"Key pressed: {text}"}
    -            elif action == "type":
    -                pyautogui.write(text, interval=TYPING_DELAY_MS / 1000)
    -                return {"text": f"Text typed: {text}"}
    -
    -        elif action in ("left_click", "right_click", "double_click", "middle_click"):
    -            time.sleep(0.1)
    -            button = {
    -                "left_click": "left",
    -                "right_click": "right",
    -                "middle_click": "middle",
    -            }
    -            if action == "double_click":
    -                pyautogui.click()
    -                time.sleep(0.1)
    -                pyautogui.click()
    -                return {"text": "Double click performed"}
    -            else:
    -                pyautogui.click(button=button.get(action, "left"))
    -                return {"text": f"{action.replace('_', ' ').title()} performed"}
    -
    -        elif action == "screenshot":
    -            screenshot_result = self.screenshot()
    -            return {"type": "image", "source": screenshot_result["source"]}
    -
    -        elif action == "cursor_position":
    -            x, y = pyautogui.position()
    -            x, y = self.scale_coordinates(ScalingSource.COMPUTER, x, y)
    -            return {"text": f"X={x},Y={y}"}
    -
    -        else:
    -            return {"text": f"Invalid action: {action}"}
    -
    -        # Take a screenshot after the action (except for cursor_position)
    -        if action != "cursor_position":
    -            screenshot_result = self.screenshot()
    -            return {
    -                "type": "image",
    -                "text": f"Action '{action}' completed",
    -                "source": screenshot_result["source"]
    -            }
    -
    -    def screenshot(self, return_bytes=False):
    -        """Take a screenshot of the current screen and return the base64 encoded image."""
    -        temp_dir = Path(tempfile.gettempdir())
    -        path = temp_dir / f"screenshot_{uuid4().hex}.png"
    -
    -        screenshot = pyautogui.screenshot()
    -        
    -        # Save original screenshot
    -        screenshot.save(str(path))
    -        print(f"Original file size: {os.path.getsize(path)} bytes")
    -
    -        # Only apply scaling if enabled and necessary
    -        if self._scaling_enabled and self.scaling_config["mode"] != ScalingMode.NONE:
    -            from PIL import Image
    -
    -            # Get target dimensions
    -            if self.scaling_config["mode"] == ScalingMode.FIXED and self.scaling_config["target_resolution"]:
    -                # Fixed mode - use target resolution directly
    -                target_width = self.scaling_config["target_resolution"]["width"]
    -                target_height = self.scaling_config["target_resolution"]["height"]
    -            else:
    -                # Use relative scaling
    -                scale = self.scaling_config.get("scale_factor", 1.0)
    -                target_width = int(self.width * scale)
    -                target_height = int(self.height * scale)
    -            
    -            # Only resize if dimensions are different
    -            if target_width != self.width or target_height != self.height:
    -                print(f"Resizing screenshot to {target_width}x{target_height}")
    -                with Image.open(path) as img:
    -                    # Resize with high-quality downsampling
    -                    img = img.resize((target_width, target_height), Image.Resampling.LANCZOS)
    -                    # Save with optimization
    -                    img.save(path, optimize=True, quality=90)
    -
    -        if path.exists():
    -            print(f"Final screenshot size: {os.path.getsize(path)} bytes")
    -            if return_bytes:
    -                return path.read_bytes()
    -            else:
    -                base64_image = base64.b64encode(path.read_bytes()).decode()
    -                path.unlink()  # Remove the temporary file
    -
    -            return {
    -                "type": "image",
    -                "source": {
    -                    "type": "base64",
    -                    "media_type": "image/png",
    -                    "data": base64_image,
    -                }
    -            }
    -                
    -        return {"text": "Failed to take screenshot"}
    -
    -    async def shell(self, command: str, take_screenshot=True):
    -        """Run a shell command and return the output, error, and optionally a screenshot."""
    -        _, stdout, stderr = await run(command)
    -        result = {"text": stdout}
    -        if stderr:
    -            result["text"] += f"\nError: {stderr}"
    -
    -        if take_screenshot:
    -            # delay to let things settle before taking a screenshot
    -            await asyncio.sleep(self._screenshot_delay)
    -            screenshot_result = await self.screenshot()
    -            result = {
    -                "type": "image",
    -                "text": result["text"],
    -                "source": screenshot_result["source"]
    -            }
    -
    -        return result
    -
    -
    -async def ComputerUse__type(text: str):
    -    """Execute a typing action using the ComputerTool."""
    -    tool = ComputerTool()
    -    return await tool(action="type", text=text)
    -
    -async def ComputerUse__key(text: str):
    -    """Execute a key press action using the ComputerTool."""
    -    tool = ComputerTool()
    -    return await tool(action="key", text=text)
    -
    -async def ComputerUse__mouse_move(coordinate: list[int, int]):
    -    """Execute a mouse move action using the ComputerTool."""
    -    tool = ComputerTool()
    -    return await tool(action="mouse_move", coordinate=coordinate)
    -
    -async def ComputerUse__left_click():
    -    """Execute a left click action using the ComputerTool."""
    -    tool = ComputerTool()
    -    return await tool(action="left_click")
    -
    -async def ComputerUse__right_click():
    -    """Execute a right click action using the ComputerTool."""
    -    tool = ComputerTool()
    -    return await tool(action="right_click")
    -
    -async def ComputerUse__middle_click():
    -    """Execute a middle click action using the ComputerTool."""
    -    tool = ComputerTool()
    -    return await tool(action="middle_click")
    -
    -async def ComputerUse__double_click():
    -    """Execute a double click action using the ComputerTool."""
    -    tool = ComputerTool()
    -    return await tool(action="double_click")
    -
    -async def ComputerUse__left_click_drag(coordinate: list[int, int]):
    -    """Execute a left click drag action using the ComputerTool."""
    -    tool = ComputerTool()
    -    return await tool(action="left_click_drag", coordinate=coordinate)
    -
    -async def ComputerUse__screenshot():
    -    """Take a screenshot using the ComputerTool."""
    -    tool = ComputerTool()
    -    return await tool(action="screenshot")
    -
    -def ComputerUse_screenshot_tool():
    -    """Take a screenshot using the ComputerTool and return the base64 encoded image."""
    -    tool = ComputerTool()
    -    return tool.screenshot()
    -
    -def ComputerUse_screenshot_tool_bytes():
    -    """Take a screenshot using the ComputerTool and return the bytes."""
    -    tool = ComputerTool()
    -    return tool.screenshot(return_bytes=True)
    -
    -async def ComputerUse__cursor_position():
    -    """Get the current cursor position using the ComputerTool."""
    -    tool = ComputerTool()
    -    return await tool(action="cursor_position")
    -
    -# List of all computer use tools
    -ComputerUse_tools = [
    -    ComputerUse__type,
    -    ComputerUse__key,
    -    ComputerUse__mouse_move,
    -    ComputerUse__left_click,
    -    ComputerUse__right_click,
    -    ComputerUse__middle_click,
    -    ComputerUse__double_click,
    -    ComputerUse__left_click_drag,
    -    ComputerUse__screenshot,
    -    ComputerUse__cursor_position
    -]
    -
    
  • src/upsonic/server/level_utilized/cu/edit.py+0 290 removed
    @@ -1,290 +0,0 @@
    -from collections import defaultdict
    -from pathlib import Path
    -from typing import Literal, get_args
    -
    -from anthropic.types.beta import BetaToolTextEditor20241022Param
    -
    -from .base import BaseAnthropicTool, CLIResult, ToolError, ToolResult
    -from .run import maybe_truncate, run
    -
    -Command = Literal[
    -    "view",
    -    "create",
    -    "str_replace",
    -    "insert",
    -    "undo_edit",
    -]
    -SNIPPET_LINES: int = 4
    -
    -
    -class EditTool(BaseAnthropicTool):
    -    """
    -    A filesystem editor tool that allows the agent to view, create, and edit files.
    -    The tool parameters are defined by Anthropic and are not editable.
    -    """
    -
    -    api_type: Literal["text_editor_20241022"] = "text_editor_20241022"
    -    name: Literal["str_replace_editor"] = "str_replace_editor"
    -
    -    _file_history: dict[Path, list[str]]
    -
    -    def __init__(self):
    -        self._file_history = defaultdict(list)
    -        super().__init__()
    -
    -    def to_params(self) -> BetaToolTextEditor20241022Param:
    -        return {
    -            "name": self.name,
    -            "type": self.api_type,
    -        }
    -
    -    async def __call__(
    -        self,
    -        *,
    -        command: Command,
    -        path: str,
    -        file_text: str | None = None,
    -        view_range: list[int] | None = None,
    -        old_str: str | None = None,
    -        new_str: str | None = None,
    -        insert_line: int | None = None,
    -        **kwargs,
    -    ):
    -        _path = Path(path)
    -        self.validate_path(command, _path)
    -        if command == "view":
    -            return await self.view(_path, view_range)
    -        elif command == "create":
    -            if file_text is None:
    -                raise ToolError("Parameter `file_text` is required for command: create")
    -            self.write_file(_path, file_text)
    -            self._file_history[_path].append(file_text)
    -            return ToolResult(output=f"File created successfully at: {_path}")
    -        elif command == "str_replace":
    -            if old_str is None:
    -                raise ToolError(
    -                    "Parameter `old_str` is required for command: str_replace"
    -                )
    -            return self.str_replace(_path, old_str, new_str)
    -        elif command == "insert":
    -            if insert_line is None:
    -                raise ToolError(
    -                    "Parameter `insert_line` is required for command: insert"
    -                )
    -            if new_str is None:
    -                raise ToolError("Parameter `new_str` is required for command: insert")
    -            return self.insert(_path, insert_line, new_str)
    -        elif command == "undo_edit":
    -            return self.undo_edit(_path)
    -        raise ToolError(
    -            f'Unrecognized command {command}. The allowed commands for the {self.name} tool are: {", ".join(get_args(Command))}'
    -        )
    -
    -    def validate_path(self, command: str, path: Path):
    -        """
    -        Check that the path/command combination is valid.
    -        """
    -        # Check if it's an absolute path
    -        if not path.is_absolute():
    -            suggested_path = Path("") / path
    -            raise ToolError(
    -                f"The path {path} is not an absolute path, it should start with `/`. Maybe you meant {suggested_path}?"
    -            )
    -        # Check if path exists
    -        if not path.exists() and command != "create":
    -            raise ToolError(
    -                f"The path {path} does not exist. Please provide a valid path."
    -            )
    -        if path.exists() and command == "create":
    -            raise ToolError(
    -                f"File already exists at: {path}. Cannot overwrite files using command `create`."
    -            )
    -        # Check if the path points to a directory
    -        if path.is_dir():
    -            if command != "view":
    -                raise ToolError(
    -                    f"The path {path} is a directory and only the `view` command can be used on directories"
    -                )
    -
    -    async def view(self, path: Path, view_range: list[int] | None = None):
    -        """Implement the view command"""
    -        if path.is_dir():
    -            if view_range:
    -                raise ToolError(
    -                    "The `view_range` parameter is not allowed when `path` points to a directory."
    -                )
    -
    -            _, stdout, stderr = await run(
    -                rf"find {path} -maxdepth 2 -not -path '*/\.*'"
    -            )
    -            if not stderr:
    -                stdout = f"Here's the files and directories up to 2 levels deep in {path}, excluding hidden items:\n{stdout}\n"
    -            return CLIResult(output=stdout, error=stderr)
    -
    -        file_content = self.read_file(path)
    -        init_line = 1
    -        if view_range:
    -            if len(view_range) != 2 or not all(isinstance(i, int) for i in view_range):
    -                raise ToolError(
    -                    "Invalid `view_range`. It should be a list of two integers."
    -                )
    -            file_lines = file_content.split("\n")
    -            n_lines_file = len(file_lines)
    -            init_line, final_line = view_range
    -            if init_line < 1 or init_line > n_lines_file:
    -                raise ToolError(
    -                    f"Invalid `view_range`: {view_range}. Its first element `{init_line}` should be within the range of lines of the file: {[1, n_lines_file]}"
    -                )
    -            if final_line > n_lines_file:
    -                raise ToolError(
    -                    f"Invalid `view_range`: {view_range}. Its second element `{final_line}` should be smaller than the number of lines in the file: `{n_lines_file}`"
    -                )
    -            if final_line != -1 and final_line < init_line:
    -                raise ToolError(
    -                    f"Invalid `view_range`: {view_range}. Its second element `{final_line}` should be larger or equal than its first `{init_line}`"
    -                )
    -
    -            if final_line == -1:
    -                file_content = "\n".join(file_lines[init_line - 1 :])
    -            else:
    -                file_content = "\n".join(file_lines[init_line - 1 : final_line])
    -
    -        return CLIResult(
    -            output=self._make_output(file_content, str(path), init_line=init_line)
    -        )
    -
    -    def str_replace(self, path: Path, old_str: str, new_str: str | None):
    -        """Implement the str_replace command, which replaces old_str with new_str in the file content"""
    -        # Read the file content
    -        file_content = self.read_file(path).expandtabs()
    -        old_str = old_str.expandtabs()
    -        new_str = new_str.expandtabs() if new_str is not None else ""
    -
    -        # Check if old_str is unique in the file
    -        occurrences = file_content.count(old_str)
    -        if occurrences == 0:
    -            raise ToolError(
    -                f"No replacement was performed, old_str `{old_str}` did not appear verbatim in {path}."
    -            )
    -        elif occurrences > 1:
    -            file_content_lines = file_content.split("\n")
    -            lines = [
    -                idx + 1
    -                for idx, line in enumerate(file_content_lines)
    -                if old_str in line
    -            ]
    -            raise ToolError(
    -                f"No replacement was performed. Multiple occurrences of old_str `{old_str}` in lines {lines}. Please ensure it is unique"
    -            )
    -
    -        # Replace old_str with new_str
    -        new_file_content = file_content.replace(old_str, new_str)
    -
    -        # Write the new content to the file
    -        self.write_file(path, new_file_content)
    -
    -        # Save the content to history
    -        self._file_history[path].append(file_content)
    -
    -        # Create a snippet of the edited section
    -        replacement_line = file_content.split(old_str)[0].count("\n")
    -        start_line = max(0, replacement_line - SNIPPET_LINES)
    -        end_line = replacement_line + SNIPPET_LINES + new_str.count("\n")
    -        snippet = "\n".join(new_file_content.split("\n")[start_line : end_line + 1])
    -
    -        # Prepare the success message
    -        success_msg = f"The file {path} has been edited. "
    -        success_msg += self._make_output(
    -            snippet, f"a snippet of {path}", start_line + 1
    -        )
    -        success_msg += "Review the changes and make sure they are as expected. Edit the file again if necessary."
    -
    -        return CLIResult(output=success_msg)
    -
    -    def insert(self, path: Path, insert_line: int, new_str: str):
    -        """Implement the insert command, which inserts new_str at the specified line in the file content."""
    -        file_text = self.read_file(path).expandtabs()
    -        new_str = new_str.expandtabs()
    -        file_text_lines = file_text.split("\n")
    -        n_lines_file = len(file_text_lines)
    -
    -        if insert_line < 0 or insert_line > n_lines_file:
    -            raise ToolError(
    -                f"Invalid `insert_line` parameter: {insert_line}. It should be within the range of lines of the file: {[0, n_lines_file]}"
    -            )
    -
    -        new_str_lines = new_str.split("\n")
    -        new_file_text_lines = (
    -            file_text_lines[:insert_line]
    -            + new_str_lines
    -            + file_text_lines[insert_line:]
    -        )
    -        snippet_lines = (
    -            file_text_lines[max(0, insert_line - SNIPPET_LINES) : insert_line]
    -            + new_str_lines
    -            + file_text_lines[insert_line : insert_line + SNIPPET_LINES]
    -        )
    -
    -        new_file_text = "\n".join(new_file_text_lines)
    -        snippet = "\n".join(snippet_lines)
    -
    -        self.write_file(path, new_file_text)
    -        self._file_history[path].append(file_text)
    -
    -        success_msg = f"The file {path} has been edited. "
    -        success_msg += self._make_output(
    -            snippet,
    -            "a snippet of the edited file",
    -            max(1, insert_line - SNIPPET_LINES + 1),
    -        )
    -        success_msg += "Review the changes and make sure they are as expected (correct indentation, no duplicate lines, etc). Edit the file again if necessary."
    -        return CLIResult(output=success_msg)
    -
    -    def undo_edit(self, path: Path):
    -        """Implement the undo_edit command."""
    -        if not self._file_history[path]:
    -            raise ToolError(f"No edit history found for {path}.")
    -
    -        old_text = self._file_history[path].pop()
    -        self.write_file(path, old_text)
    -
    -        return CLIResult(
    -            output=f"Last edit to {path} undone successfully. {self._make_output(old_text, str(path))}"
    -        )
    -
    -    def read_file(self, path: Path):
    -        """Read the content of a file from a given path; raise a ToolError if an error occurs."""
    -        try:
    -            return path.read_text()
    -        except Exception as e:
    -            raise ToolError(f"Ran into {e} while trying to read {path}") from None
    -
    -    def write_file(self, path: Path, file: str):
    -        """Write the content of a file to a given path; raise a ToolError if an error occurs."""
    -        try:
    -            path.write_text(file)
    -        except Exception as e:
    -            raise ToolError(f"Ran into {e} while trying to write to {path}") from None
    -
    -    def _make_output(
    -        self,
    -        file_content: str,
    -        file_descriptor: str,
    -        init_line: int = 1,
    -        expand_tabs: bool = True,
    -    ):
    -        """Generate output for the CLI based on the content of a file."""
    -        file_content = maybe_truncate(file_content)
    -        if expand_tabs:
    -            file_content = file_content.expandtabs()
    -        file_content = "\n".join(
    -            [
    -                f"{i + init_line:6}\t{line}"
    -                for i, line in enumerate(file_content.split("\n"))
    -            ]
    -        )
    -        return (
    -            f"Here's the result of running `cat -n` on {file_descriptor}:\n"
    -            + file_content
    -            + "\n"
    -        )
    \ No newline at end of file
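The removed `EditTool.str_replace` above hinges on a uniqueness check: a replacement is refused when the target string is absent or matches more than once, so the agent never silently edits the wrong occurrence. A minimal standalone sketch of that check (`safe_str_replace` is a hypothetical name, not part of Upsonic's API):

```python
# Sketch of the uniqueness-checked replacement performed by the removed
# EditTool.str_replace; `safe_str_replace` is a hypothetical helper.
def safe_str_replace(content: str, old_str: str, new_str: str) -> str:
    occurrences = content.count(old_str)
    if occurrences == 0:
        raise ValueError(f"old_str {old_str!r} did not appear in the content")
    if occurrences > 1:
        raise ValueError(f"old_str {old_str!r} is ambiguous ({occurrences} matches)")
    # Exactly one match: the edit is unambiguous.
    return content.replace(old_str, new_str)
```

Rejecting ambiguous matches pushes the caller to supply a longer, unique context string instead of guessing which occurrence was meant.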
    
  • src/upsonic/server/level_utilized/cu/__init__.py+0 17 removed
    @@ -1,17 +0,0 @@
    -
    -
    -from .computer import ComputerTool
    -
    -from .computer import ComputerUse_tools, ComputerUse_screenshot_tool, ComputerUse_screenshot_tool_bytes
    -
    -__ALL__ = [
    -
    -
    -    ComputerTool,
    -
    -
    -
    -    ComputerUse_tools,
    -    ComputerUse_screenshot_tool,
    -    ComputerUse_screenshot_tool_bytes
    -]
    \ No newline at end of file
    
  • src/upsonic/server/level_utilized/cu/run.py+0 42 removed
    @@ -1,42 +0,0 @@
    -"""Utility to run shell commands asynchronously with a timeout."""
    -
    -import asyncio
    -
    -TRUNCATED_MESSAGE: str = "<response clipped><NOTE>To save on context only part of this file has been shown to you. You should retry this tool after you have searched inside the file with `grep -n` in order to find the line numbers of what you are looking for.</NOTE>"
    -MAX_RESPONSE_LEN: int = 16000
    -
    -
    -def maybe_truncate(content: str, truncate_after: int | None = MAX_RESPONSE_LEN):
    -    """Truncate content and append a notice if content exceeds the specified length."""
    -    return (
    -        content
    -        if not truncate_after or len(content) <= truncate_after
    -        else content[:truncate_after] + TRUNCATED_MESSAGE
    -    )
    -
    -
    -async def run(
    -    cmd: str,
    -    timeout: float | None = 120.0,  # seconds
    -    truncate_after: int | None = MAX_RESPONSE_LEN,
    -):
    -    """Run a shell command asynchronously with a timeout."""
    -    process = await asyncio.create_subprocess_shell(
    -        cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
    -    )
    -
    -    try:
    -        stdout, stderr = await asyncio.wait_for(process.communicate(), timeout=timeout)
    -        return (
    -            process.returncode or 0,
    -            maybe_truncate(stdout.decode(), truncate_after=truncate_after),
    -            maybe_truncate(stderr.decode(), truncate_after=truncate_after),
    -        )
    -    except asyncio.TimeoutError as exc:
    -        try:
    -            process.kill()
    -        except ProcessLookupError:
    -            pass
    -        raise TimeoutError(
    -            f"Command '{cmd}' timed out after {timeout} seconds"
    -        ) from exc
    \ No newline at end of file
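The removed `run()` helper above follows a common asyncio pattern: spawn a shell command, collect its output under a deadline, and kill the process if the deadline passes. A self-contained sketch of that pattern (names are illustrative, not Upsonic's API; the truncation step is omitted):

```python
import asyncio

# Spawn a shell command, gather stdout/stderr with a deadline, and kill
# the process on timeout -- the same shape as the removed run() helper.
async def run_with_timeout(cmd: str, timeout: float = 5.0) -> tuple[int, str, str]:
    proc = await asyncio.create_subprocess_shell(
        cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
    )
    try:
        stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
    except asyncio.TimeoutError as exc:
        proc.kill()
        raise TimeoutError(f"Command {cmd!r} timed out after {timeout}s") from exc
    return proc.returncode or 0, stdout.decode(), stderr.decode()

code, out, err = asyncio.run(run_with_timeout("echo hello"))
```

Killing the process in the timeout branch matters: without it, the orphaned subprocess keeps running after the caller has given up waiting.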
    
  • src/upsonic/server/level_utilized/memory.py+0 38 removed
    @@ -1,38 +0,0 @@
    -"""
    -Module for handling temporary memory storage of agent messages.
    -"""
    -
    -import pickle
    -import base64
    -from ...storage.configuration import Configuration
    -
    -def save_temporary_memory(messages: list, agent_id: str) -> None:
    -    """
    -    Save messages for a specific agent ID in temporary memory.
    -    
    -    Args:
    -        messages: List of messages to store
    -        agent_id: Unique identifier for the agent
    -    """
    -    # Serialize messages using pickle and base64 encode for storage
    -    serialized_messages = base64.b64encode(pickle.dumps(messages)).decode('utf-8')
    -    Configuration.set(f"temp_memory_{agent_id}", serialized_messages)
    -
    -
    -def get_temporary_memory(agent_id: str) -> list:
    -    """
    -    Retrieve messages for a specific agent ID from temporary memory.
    -    
    -    Args:
    -        agent_id: Unique identifier for the agent
    -        
    -    Returns:
    -        List of messages if found, None if not found
    -    """
    -    serialized_messages = Configuration.get(f"temp_memory_{agent_id}")
    -    if serialized_messages is None:
    -        return None
    -    
    -    # Deserialize messages from base64 encoded pickle
    -    messages = pickle.loads(base64.b64decode(serialized_messages))
    -    return messages
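The removed memory module round-trips agent messages through `pickle`, which can execute arbitrary code when it deserializes attacker-controlled data. For plain message payloads, a JSON-based equivalent avoids that risk class entirely; the following is a sketch under the assumption that messages are JSON-serializable, not Upsonic's actual API:

```python
import base64
import json

# JSON round-trip with the same base64 envelope as the removed module;
# unlike pickle.loads, json.loads cannot run code during deserialization.
def serialize_messages(messages: list) -> str:
    return base64.b64encode(json.dumps(messages).encode("utf-8")).decode("utf-8")

def deserialize_messages(blob: str) -> list:
    return json.loads(base64.b64decode(blob))
```

The trade-off is that only JSON-compatible structures round-trip, whereas pickle also handles arbitrary Python objects.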
    
  • src/upsonic/server/level_utilized/utility.py+0 643 removed
    @@ -1,643 +0,0 @@
    -import inspect
    -import traceback
    -import types
    -from itertools import chain
    -from pydantic_ai import Agent
    -from pydantic_ai.models.openai import OpenAIModel
    -from pydantic_ai.models.anthropic import AnthropicModel
    -from pydantic_ai.models.gemini import GeminiModel
    -from openai import AsyncOpenAI, NOT_GIVEN
    -from openai import AsyncAzureOpenAI
    -from pydantic_ai.providers.openai import OpenAIProvider
    -from pydantic_ai.providers.anthropic import AnthropicProvider
    -from pydantic_ai.providers.google_gla import GoogleGLAProvider
    -import hashlib
    -from pydantic_ai.messages import ImageUrl
    -from pydantic_ai import BinaryContent
    -
    -from pydantic import BaseModel
    -from fastapi import HTTPException, status
    -from functools import wraps
    -from typing import Any, Callable, Optional, Dict
    -from pydantic_ai import RunContext, Tool
    -from anthropic import AsyncAnthropicBedrock
    -from dataclasses import dataclass
    -from openai.types.chat import ChatCompletion, ChatCompletionChunk
    -from openai.types import chat
    -from collections.abc import AsyncIterator
    -from typing import Literal
    -from openai import AsyncStream
    -
    -
    -from ...storage.configuration import Configuration
    -from ...storage.caching import save_to_cache_with_expiry, get_from_cache_with_expiry
    -
    -from ...tools_server.function_client import FunctionToolManager
    -
    -# Import from the centralized model registry
    -from ...model_registry import (
    -    MODEL_SETTINGS,
    -    MODEL_REGISTRY,
    -    OPENAI_MODELS,
    -    ANTHROPIC_MODELS,
    -    get_model_registry_entry,
    -    get_model_settings,
    -    has_capability
    -)
    -
    -def tool_wrapper(func: Callable) -> Callable:
    -    @wraps(func)
    -    def wrapper(*args: Any, **kwargs: Any) -> Any:
    -        # Log the tool call
    -        tool_name = getattr(func, "__name__", str(func))
    -        
    -        try:
    -            # Call the original function
    -            result = func(*args, **kwargs)
    -
    -            return result
    -        except Exception as e:
    -            print("Tool call failed:", e)
    -            return {"status_code": 500, "detail": f"Tool call failed: {e}"}
    -    
    -    return wrapper
    -
    -def summarize_text(text: str, llm_model: Any, chunk_size: int = 100000, max_size: int = 300000) -> str:
    -    """Base function to summarize any text by splitting into chunks and summarizing each."""
    -    # Return early if text is None or empty
    -    if text is None:
    -        return ""
    -    
    -    if not isinstance(text, str):
    -        try:
    -            text = str(text)
    -        except:
    -            return ""
    -
    -    if not text:
    -        return ""
    -
    -    # If text is already under max_size, return it
    -    if len(text) <= max_size:
    -        return text
    -
    -    # Generate a cache key based on text content and parameters
    -    cache_key = hashlib.md5(f"{text}{llm_model}{chunk_size}{max_size}".encode()).hexdigest()
    -    
    -    # Try to get from cache first
    -    cached_result = get_from_cache_with_expiry(cache_key)
    -    if cached_result is not None:
    -        print("Using cached summary")
    -        return cached_result
    -
    -    # Adjust chunk size based on model
    -    if "gpt" in str(llm_model).lower():
    -        # OpenAI has a 1M character limit, we'll use a much smaller chunk size to be safe
    -        chunk_size = min(chunk_size, 100000)  # 100K per chunk for OpenAI
    -    elif "claude" in str(llm_model).lower():
    -        chunk_size = min(chunk_size, 200000)  # 200K per chunk for Claude
    -    
    -    try:
    -        print(f"Original text length: {len(text)}")
    -        
    -        # If text is extremely long, do an initial aggressive truncation
    -        if len(text) > 2000000:  # If over 2M characters
    -            text = text[:2000000]  # Take first 2M characters
    -            print("Text was extremely long, truncated to 2M characters")
    -        
    -        chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    -        print(f"Number of chunks: {len(chunks)}")
    -        
    -        model = agent_creator(response_format=str, tools=[], context=None, llm_model=llm_model, system_prompt=None)
    -        if isinstance(model, dict) and "status_code" in model:
    -            print(f"Error creating model: {model}")
    -            return text[:max_size]
    -        
    -        # Process chunks in smaller batches if there are too many
    -        batch_size = 5
    -        summarized_chunks = []
    -        
    -        for batch_start in range(0, len(chunks), batch_size):
    -            batch_end = min(batch_start + batch_size, len(chunks))
    -            batch = chunks[batch_start:batch_end]
    -            
    -            for i, chunk in enumerate(batch):
    -                chunk_num = batch_start + i + 1
    -                try:
    -                    print(f"Processing chunk {chunk_num}/{len(chunks)}, length: {len(chunk)}")
    -                    
    -                    # Create a more focused prompt for better summarization
    -                    prompt = (
    -                        "Please provide an extremely concise summary of the following text. "
    -                        "Focus only on the most important points and key information. "
    -                        "Be as brief as possible while retaining critical meaning:\n\n"
    -                    )
    -                    
    -                    message = [{"type": "text", "text": prompt + chunk}]
    -                    result = model.run_sync(message)
    -                    
    -                    if result and hasattr(result, 'data') and result.data:
    -                        # Ensure the summary isn't too long
    -                        summary = result.data[:max_size//len(chunks)]
    -                        summarized_chunks.append(summary)
    -                    else:
    -                        print(f"Warning: Empty or invalid result for chunk {chunk_num}")
    -                        # Include a shorter truncated version as fallback
    -                        summarized_chunks.append(chunk[:500] + "...")
    -                except Exception as e:
    -                    print(f"Error summarizing chunk {chunk_num}: {str(e)}")
    -                    # Include a shorter truncated version as fallback
    -                    summarized_chunks.append(chunk[:500] + "...")
    -
    -        # Combine all summarized chunks
    -        combined_summary = "\n\n".join(summarized_chunks)
    -        
    -        # If still too long, recursively summarize with smaller chunks
    -        if len(combined_summary) > max_size:
    -            print(f"Combined summary still too long ({len(combined_summary)} chars), recursively summarizing...")
    -            return summarize_text(
    -                combined_summary, 
    -                llm_model, 
    -                chunk_size=max(5000, chunk_size//4),  # Reduce chunk size more aggressively
    -                max_size=max_size
    -            )
    -            
    -        print(f"Final summary length: {len(combined_summary)}")
    -        
    -        # Cache the result for 1 hour (3600 seconds)
    -        save_to_cache_with_expiry(combined_summary, cache_key, 3600)
    -        
    -        return combined_summary
    -    except Exception as e:
    -        traceback.print_exc()
    -        print(f"Error in summarize_text: {str(e)}")
    -        # If all else fails, return a truncated version
    -        return text[:max_size]
    -
    -def summarize_message_prompt(message_prompt: str, llm_model: Any) -> str:
    -    """Summarizes the message prompt to reduce its length while preserving key information."""
    -    print("\n\n\n****************Summarizing message prompt****************\n\n\n")
    -    if message_prompt is None:
    -        return ""
    -    
    -    try:
    -        # Use a smaller max size for message prompts
    -        max_size = 50000  # 50K for messages
    -        summarized_message_prompt = summarize_text(message_prompt, llm_model, max_size=max_size)
    -        if summarized_message_prompt is None:
    -            return ""
    -        print("Before summarize_message_prompt length: ", len(message_prompt))
    -        print(f"Summarized message prompt length: {len(summarized_message_prompt)}")
    -        return summarized_message_prompt
    -    except Exception as e:
    -        print(f"Error in summarize_message_prompt: {str(e)}")
    -        try:
    -            return str(message_prompt)[:50000] if message_prompt else ""
    -        except:
    -            return ""
    -
    -def summarize_system_prompt(system_prompt: str, llm_model: Any) -> str:
    -    """Summarizes the system prompt to reduce its length while preserving key information."""
    -    print("\n\n\n****************Summarizing system prompt****************\n\n\n")
    -    if system_prompt is None:
    -        return ""
    -    
    -    try:
    -        # Use a smaller max size for system prompts
    -        max_size = 50000  # 50K for system prompts
    -        summarized_system_prompt = summarize_text(system_prompt, llm_model, max_size=max_size)
    -        if summarized_system_prompt is None:
    -            return ""
    -        print("Before summarize_system_prompt length: ", len(system_prompt))
    -        print(f"Summarized system prompt length: {len(summarized_system_prompt)}")
    -        return summarized_system_prompt
    -    except Exception as e:
    -        print(f"Error in summarize_system_prompt: {str(e)}")
    -        try:
    -            return str(system_prompt)[:50000] if system_prompt else ""
    -        except:
    -            return ""
    -
    -def summarize_context_string(context_string: str, llm_model: Any) -> str:
    -    """Summarizes the context string to reduce its length while preserving key information."""
    -    print("\n\n\n****************Summarizing context string****************\n\n\n")
    -    if context_string is None or context_string == "":
    -        return ""
    -    
    -    try:
    -        # Use a smaller max size for context strings
    -        max_size = 50000  # 50K for context strings
    -        summarized_context = summarize_text(context_string, llm_model, max_size=max_size)
    -        if summarized_context is None:
    -            return ""
    -        print("Before summarize_context_string length: ", len(context_string))
    -        print(f"Summarized context string length: {len(summarized_context)}")
    -        return summarized_context
    -    except Exception as e:
    -        print(f"Error in summarize_context_string: {str(e)}")
    -        try:
    -            return str(context_string)[:50000] if context_string else ""
    -        except:
    -            return ""
    -
    -def process_error_traceback(e):
    -    """Extract and format error traceback information consistently."""
    -    tb = traceback.extract_tb(e.__traceback__)
    -    file_path = tb[-1].filename
    -    if "pydantic_ai" in file_path:
    -        return {"status_code": 500, "detail": str(e)}
    -    if "Upsonic/src/" in file_path:
    -        file_path = file_path.split("Upsonic/src/")[1]
    -    line_number = tb[-1].lineno
    -    return {"status_code": 500, "detail": f"Error processing request in {file_path} at line {line_number}: {str(e)}"}
    -
    -def prepare_message_history(prompt, images=None, llm_model=None, tools=None):
    -    """Prepare message history with prompt and images, adding screenshot for models with computer use capability."""
    -    message_history = [prompt]
    -    
    -    if images:
    -        for image in images:
    -            message_history.append(ImageUrl(url=f"data:image/jpeg;base64,{image}"))
    -
    -    # Add screenshot for models with computer_use capability when ComputerUse tools are requested
    -
    -    if llm_model and tools and ("ComputerUse.*" in tools or "Screenshot.*" in tools) and has_capability(llm_model, "computer_use"):
    -        try:
    -            from .cu import ComputerUse_screenshot_tool_bytes
    -            result_of_screenshot = ComputerUse_screenshot_tool_bytes()
    -            message_history.append(BinaryContent(data=result_of_screenshot, media_type='image/png'))
    -            print(f"Added screenshot for model {llm_model} with computer_use capability")
    -        except Exception as e:
    -            print(f"Error adding screenshot for {llm_model}: {e}")
    -            
    -    return message_history
    -
    -def format_response(result):
    -    """Format the successful response consistently."""
    -    messages = result.all_messages()
    -    
    -    # Track tool usage
    -    tool_usage = []
    -    current_tool = None
    -    
    -    for msg in messages:
    -        if msg.kind == 'request':
    -            for part in msg.parts:
    -                if part.part_kind == 'tool-return':
    -                    if current_tool and current_tool['tool_name'] != 'final_result':
    -                        current_tool['tool_result'] = part.content
    -                        tool_usage.append(current_tool)
    -                    current_tool = None
    -                    
    -        elif msg.kind == 'response':
    -            for part in msg.parts:
    -                if part.part_kind == 'tool-call' and part.tool_name != 'final_result':
    -                    current_tool = {
    -                        'tool_name': part.tool_name,
    -                        'params': part.args,
    -                        'tool_result': None
    -                    }
    -
    -    usage = result.usage()
    -    return {
    -        "status_code": 200,
    -        "result": result.data,
    -        "usage": {
    -            "input_tokens": usage.request_tokens,
    -            "output_tokens": usage.response_tokens
    -        },
    -        "tool_usage": tool_usage
    -    }
    -
    -async def handle_compression_retry(prompt, images, tools, llm_model, response_format, context, system_prompt=None, agent_memory=None):
    -    """Handle compression and retry when facing token limit issues."""
    -    try:
    -        # Compress prompts
    -        compressed_system_prompt = summarize_system_prompt(system_prompt, llm_model) if system_prompt else None
    -        compressed_message = summarize_message_prompt(prompt, llm_model)
    -        
    -        # Prepare new message history
    -        message_history = prepare_message_history(compressed_message, images, llm_model, tools)
    -        
    -        # Create new agent with compressed prompts
    -        roulette_agent = agent_creator(
    -            response_format=response_format,
    -            tools=tools,
    -            context=context,
    -            llm_model=llm_model,
    -            system_prompt=compressed_system_prompt,
    -            context_compress=False
    -        )
    -        
    -        # Run the agent with compressed inputs
    -        print("Sending request with compressed prompts")
    -        if agent_memory:
    -            result = await roulette_agent.run(message_history, message_history=agent_memory)
    -        else:
    -            result = await roulette_agent.run(message_history)
    -        print("Received response with compressed prompts")
    -        
    -        return result
    -    except Exception as e:
    -        raise e  # Re-raise for consistent error handling
    -
    -def _create_openai_client(api_key_name="OPENAI_API_KEY"):
    -    """Helper function to create an OpenAI client with the specified API key."""
    -    api_key = Configuration.get(api_key_name)
    -    if not api_key:
    -        return None, {"status_code": 401, "detail": f"No API key provided. Please set {api_key_name} in your configuration."}
    -    
    -    client = AsyncOpenAI(api_key=api_key)
    -    return client, None
    -
    -def _create_azure_openai_client():
    -    """Helper function to create an Azure OpenAI client."""
    -    azure_endpoint = Configuration.get("AZURE_OPENAI_ENDPOINT")
    -    azure_api_version = Configuration.get("AZURE_OPENAI_API_VERSION")
    -    azure_api_key = Configuration.get("AZURE_OPENAI_API_KEY")
    -
    -    missing_keys = []
    -    if not azure_endpoint:
    -        missing_keys.append("AZURE_OPENAI_ENDPOINT")
    -    if not azure_api_version:
    -        missing_keys.append("AZURE_OPENAI_API_VERSION")
    -    if not azure_api_key:
    -        missing_keys.append("AZURE_OPENAI_API_KEY")
    -
    -    if missing_keys:
    -        return None, {
    -            "status_code": 401,
    -            "detail": f"No API key provided. Please set {', '.join(missing_keys)} in your configuration."
    -        }
    -
    -    client = AsyncAzureOpenAI(
    -        api_version=azure_api_version, 
    -        azure_endpoint=azure_endpoint, 
    -        api_key=azure_api_key
    -    )
    -    return client, None
    -
    -def _create_openai_model(model_name: str, api_key_name: str = "OPENAI_API_KEY"):
    -    """Helper function to create an OpenAI model with specified model name and API key."""
    -    client, error = _create_openai_client(api_key_name)
    -    if error:
    -        return None, error
    -    return OpenAIModel(model_name, provider=OpenAIProvider(openai_client=client)), None
    -
    -def _create_azure_openai_model(model_name: str):
    -    """Helper function to create an Azure OpenAI model with specified model name."""
    -    client, error = _create_azure_openai_client()
    -    if error:
    -        return None, error
    -    return OpenAIModel(model_name, provider=OpenAIProvider(openai_client=client)), None
    -
    -def _create_deepseek_model():
    -    """Helper function to create a Deepseek model."""
    -    deepseek_api_key = Configuration.get("DEEPSEEK_API_KEY")
    -    if not deepseek_api_key:
    -        return None, {"status_code": 401, "detail": "No API key provided. Please set DEEPSEEK_API_KEY in your configuration."}
    -
    -    return OpenAIModel(
    -        'deepseek-chat',
    -        provider=OpenAIProvider(
    -            base_url='https://api.deepseek.com',
    -            api_key=deepseek_api_key
    -        )
    -    ), None
    -
    -def _create_ollama_model(model_name: str):
    -    """Helper function to create an Ollama model with specified model name."""
    -    # Ollama runs locally, so we don't need API keys
    -    base_url = Configuration.get("OLLAMA_BASE_URL", "http://localhost:11434/v1")
    -    return OpenAIModel(
    -        model_name,
    -        provider=OpenAIProvider(base_url=base_url)
    -    ), None
    -
    -def _create_openrouter_model(model_name: str):
    -    """Helper function to create an OpenRouter model with specified model name."""
    -    api_key = Configuration.get("OPENROUTER_API_KEY")
    -    if not api_key:
    -        return None, {"status_code": 401, "detail": "No API key provided. Please set OPENROUTER_API_KEY in your configuration."}
    -    
    -    # If model_name starts with openrouter/, remove it
    -    if model_name.startswith("openrouter/"):
    -        model_name = model_name.split("openrouter/", 1)[1]
    -    
    -    return OpenAIModel(
    -        model_name,
    -        provider=OpenAIProvider(
    -            base_url='https://openrouter.ai/api/v1',
    -            api_key=api_key
    -        )
    -    ), None
    -
    -def _create_gemini_model(model_name: str):
    -    """Helper function to create a Gemini model with specified model name."""
    -    api_key = Configuration.get("GOOGLE_GLA_API_KEY")
    -    if not api_key:
    -        return None, {"status_code": 401, "detail": "No API key provided. Please set GOOGLE_GLA_API_KEY in your configuration."}
    -    
    -    return GeminiModel(
    -        model_name,
    -        provider=GoogleGLAProvider(api_key=api_key)
    -    ), None
    -
    -def _create_anthropic_model(model_name: str):
    -    """Helper function to create an Anthropic model with specified model name."""
    -    anthropic_api_key = Configuration.get("ANTHROPIC_API_KEY")
    -    if not anthropic_api_key:
    -        return None, {"status_code": 401, "detail": "No API key provided. Please set ANTHROPIC_API_KEY in your configuration."}
    -    return AnthropicModel(model_name, provider=AnthropicProvider(api_key=anthropic_api_key)), None
    -
    -def _create_bedrock_anthropic_model(model_name: str):
    -    """Helper function to create an AWS Bedrock Anthropic model with specified model name."""
    -    aws_access_key_id = Configuration.get("AWS_ACCESS_KEY_ID")
    -    aws_secret_access_key = Configuration.get("AWS_SECRET_ACCESS_KEY")
    -    aws_region = Configuration.get("AWS_REGION")
    -
    -    if not aws_access_key_id or not aws_secret_access_key or not aws_region:
    -        return None, {"status_code": 401, "detail": "No AWS credentials provided. Please set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION in your configuration."}
    -    
    -    bedrock_client = AsyncAnthropicBedrock(
    -        aws_access_key=aws_access_key_id,
    -        aws_secret_key=aws_secret_access_key,
    -        aws_region=aws_region
    -    )
    -
    -    return AnthropicModel(model_name, provider=AnthropicProvider(anthropic_client=bedrock_client)), None
    -
    -def _process_context(context):
    -    """Process context data into a formatted context string."""
    -    if context is None:
    -        return ""
    -        
    -    if not isinstance(context, list):
    -        context = [context]
    -        
    -    context_string = ""
    -    for each in context:
    -        from ...client.level_two.agent import Characterization
    -        from ...client.level_two.agent import OtherTask
    -        from ...client.tasks.tasks import Task
    -        from ...client.tasks.task_response import ObjectResponse
    -        from ...client.knowledge_base.knowledge_base import KnowledgeBase
    -        type_string = type(each).__name__
    -        the_class_string = None
    -        try:
    -            the_class_string = each.__bases__[0].__name__
    -        except:
    -            pass
    -        
    -        if type_string == Characterization.__name__:
    -            context_string += f"\n\nThis is your character ```character {each.model_dump()}```"
    -        elif type_string == OtherTask.__name__:
    -            context_string += f"\n\nContexts from question answering: ```question_answering question: {each.task} answer: {each.result}```"
    -        elif type_string == Task.__name__:
    -            response = None
    -            description = each.description
    -            try:
    -                response = each.response.dict()
    -            except:
    -                try:
    -                    response = each.response.model_dump()
    -                except:
    -                    response = each.response
    -                    
    -            context_string += f"\n\nContexts from question answering: ```question_answering question: {description} answer: {response}```   "
    -        elif the_class_string == ObjectResponse.__name__ or the_class_string == BaseModel.__name__:
    -            context_string += f"\n\nContexts from object response: ```Requested Output {each.model_fields}```"
    -        else:
    -            context_string += f"\n\nContexts ```context {each}```"
    -            
    -    return context_string
    -
    -def _setup_tools(roulette_agent, tools, llm_model):
    -    """Set up the tools for the agent."""
    -    the_wrapped_tools = []
    -
    -    # First check for ComputerUse tools compatibility
    -    if "ComputerUse.*" in tools:
    -        if not has_capability(llm_model, "computer_use"):
    -            return {
    -                "status_code": 405,
    -                "detail": f"ComputerUse tools are not supported by the model {llm_model}. Please use a model that supports computer_use capability."
    -            }
    -
    -    # Set up function tools
    -    with FunctionToolManager() as function_client:
    -        the_list_of_tools = function_client.get_tools_by_name(tools)
    -
    -        for each in the_list_of_tools:
    -            wrapped_tool = tool_wrapper(each)
    -            the_wrapped_tools.append(wrapped_tool)
    -        
    -    for each in the_wrapped_tools:
    -        signature = inspect.signature(each)
    -        roulette_agent.tool_plain(each, retries=5)
    -
    -    # Set up ComputerUse tools for models with that capability
    -    if "ComputerUse.*" in tools:
    -        try:
    -            from .cu import ComputerUse_tools
    -            for each in ComputerUse_tools:
    -                roulette_agent.tool_plain(each, retries=5)
    -        except Exception as e:
    -            print(f"Error setting up ComputerUse tools: {e}")
    -
    -    # Set up BrowserUse tools
    -    if "BrowserUse.*" in tools:
    -        try:
    -            from .bu import BrowserUse_tools
    -            from .bu.browseruse import LLMManager
    -            LLMManager.set_model(llm_model)
    -
    -            for each in BrowserUse_tools:
    -                roulette_agent.tool_plain(each, retries=5)
    -        except Exception as e:
    -            print(f"Error setting up BrowserUse tools: {e}")
    -            
    -    return roulette_agent
    -
    -def _create_model_from_registry(llm_model: str):
    -    """Create a model instance based on the registry entry."""
    -    registry_entry = get_model_registry_entry(llm_model)
    -    if not registry_entry:
    -        return None, {"status_code": 400, "detail": f"Unsupported LLM model: {llm_model}"}
    -    
    -    provider = registry_entry["provider"]
    -    model_name = registry_entry["model_name"]
    -    
    -    if provider == "openai":
    -        api_key = registry_entry.get("api_key", "OPENAI_API_KEY")
    -        return _create_openai_model(model_name, api_key)
    -    elif provider == "azure_openai":
    -        return _create_azure_openai_model(model_name)
    -    elif provider == "deepseek":
    -        return _create_deepseek_model()
    -    elif provider == "anthropic":
    -        return _create_anthropic_model(model_name)
    -    elif provider == "bedrock_anthropic":
    -        return _create_bedrock_anthropic_model(model_name)
    -    elif provider == "ollama":
    -        return _create_ollama_model(model_name)
    -    elif provider == "openrouter":
    -        return _create_openrouter_model(model_name)
    -    elif provider == "gemini":
    -        return _create_gemini_model(model_name)
    -    else:
    -        return None, {"status_code": 400, "detail": f"Unsupported provider: {provider}"}
    -
    -def agent_creator(
    -        response_format: BaseModel = str,
    -        tools: list[str] = [],
    -        context: Any = None,
    -        llm_model: str = None,
    -        system_prompt: Optional[Any] = None,
    -        context_compress: bool = False
    -    ):
    -        # Use default model if none provided
    -        if llm_model is None:
    -            llm_model = "openai/gpt-4o"
    -            print(f"No model specified, using default: {llm_model}")
    -        
    -        # Get the model from registry
    -        model, error = _create_model_from_registry(llm_model)
    -        if error:
    -            return error
    -
    -        # Process context
    -        context_string = _process_context(context)
    -
    -        # Compress context string if enabled
    -        if context_compress and context_string:
    -            context_string = summarize_context_string(context_string, llm_model)
    -
    -        # Prepare system prompt
    -        system_prompt_ = ()
    -        if system_prompt is not None:
    -            system_prompt_ = system_prompt + f"The context is: {context_string}"
    -        elif context_string != "":
    -            system_prompt_ = f"You are a helpful assistant. User want to add an context to the task. The context is: {context_string}"
    -        
    -        # Get the appropriate model settings based on the model type
    -        model_settings = get_model_settings(llm_model, tools)
    -
    -        # Create the agent
    -        roulette_agent = Agent(
    -            model,
    -            result_type=response_format,
    -            retries=5,
    -            system_prompt=system_prompt_,
    -            model_settings=model_settings
    -        )
    -
    -        # Set up tools and check for errors
    -        result = _setup_tools(roulette_agent, tools, llm_model)
    -        
    -        # If result is a dict, it means there was an error
    -        if isinstance(result, dict) and "status_code" in result:
    -            return result
    -
    -        return result
    -
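The removed `_create_model_from_registry` dispatches on the provider name with a long if/elif chain. A table-driven dispatcher keeps the same `(model, error)` return convention with less branching. This is a hypothetical sketch, not Upsonic's actual code; the `ModelResult` alias and `make_dispatcher` name are illustrative.

```python
from typing import Any, Callable, Dict, Optional, Tuple

# Mirrors the removed _create_* helpers' convention:
# (model, None) on success, (None, error_dict) on failure.
ModelResult = Tuple[Optional[Any], Optional[dict]]

def make_dispatcher(
    factories: Dict[str, Callable[[str], ModelResult]]
) -> Callable[[str, str], ModelResult]:
    """Build a provider dispatcher from a factory table."""
    def create(provider: str, model_name: str) -> ModelResult:
        factory = factories.get(provider)
        if factory is None:
            # Same error shape as the removed if/elif fallthrough branch.
            return None, {"status_code": 400,
                          "detail": f"Unsupported provider: {provider}"}
        return factory(model_name)
    return create
```

Registering a new provider then becomes a one-line table entry instead of another elif branch.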
    
  • src/upsonic/server_manager.py · +0 −211 (file removed)

    @@ -1,211 +0,0 @@
    -import os
    -import signal
    -import sys
    -import time
    -import socket
    -import subprocess
    -import traceback
    -import psutil
    -from contextlib import closing
    -from typing import Optional
    -
    -class ServerManager:
    -    def __init__(self, app_path: str, host: str, port: int, name: str):
    -        self.app_path = app_path
    -        self.host = host
    -        self.port = port
    -        self.name = name
    -        self._process: Optional[subprocess.Popen] = None
    -        self._pid_file = os.path.join(os.path.expanduser("~"), f".upsonic_{name}_server.pid")
    -
    -    def _is_port_in_use(self) -> bool:
    -        """Check if the port is in use."""
    -        try:
    -            # Faster method to check port availability
    -            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    -            sock.settimeout(0.2)  # Reduce timeout for faster checks
    -            result = sock.connect_ex((self.host, self.port))
    -            sock.close()
    -            return result == 0
    -        except Exception:
    -            # Fallback to the original method
    -            with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as sock:
    -                return sock.connect_ex((self.host, self.port)) == 0
    -
    -    def _kill_process_using_port(self) -> bool:
    -        """Find and kill processes using the specified port."""
    -        killed = False
    -        
    -        for proc in psutil.process_iter(['pid', 'name']):
    -            try:
    -                process = psutil.Process(proc.info['pid'])
    -                for conn in process.net_connections():
    -                    if hasattr(conn, 'laddr') and len(conn.laddr) >= 2 and conn.laddr[1] == self.port:
    -                        try:
    -                            # Kill process more aggressively
    -                            process.kill()
    -                            process.wait(timeout=0.5)  # Reduced timeout
    -                            killed = True
    -                        except Exception:
    -                            # Last resort: try system kill
    -                            try:
    -                                os.kill(process.pid, signal.SIGKILL)
    -                                killed = True
    -                            except Exception:
    -                                pass
    -            except Exception:
    -                continue
    -                
    -        return killed
    -
    -    def _manage_pid_file(self, operation: str):
    -        """Manage PID file operations (read/write/cleanup)."""
    -        try:
    -            if operation == "write" and self._process and self._process.pid:
    -                with open(self._pid_file, 'w') as f:
    -                    f.write(str(self._process.pid))
    -            elif operation == "read" and os.path.exists(self._pid_file):
    -                with open(self._pid_file, 'r') as f:
    -                    return int(f.read().strip())
    -            elif operation == "cleanup" and os.path.exists(self._pid_file):
    -                os.remove(self._pid_file)
    -        except Exception:
    -            pass
    -        
    -        return None if operation == "read" else False
    -
    -    def _cleanup_port(self):
    -        """Clean up the port before starting."""
    -        if not self._is_port_in_use():
    -            return True
    -            
    -        # Try to kill processes using the port
    -        self._kill_process_using_port()
    -        time.sleep(0.2)  # Reduced sleep time
    -        
    -        # If port is still in use, try one more aggressive approach
    -        if self._is_port_in_use():
    -            try:
    -                # Try to bind to the port to force it free
    -                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    -                sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    -                sock.bind((self.host, self.port))
    -                sock.close()
    -            except socket.error:
    -                # If binding fails, try one more kill attempt
    -                for proc in psutil.process_iter(['pid']):
    -                    try:
    -                        process = psutil.Process(proc.info['pid'])
    -                        if any(len(conn.laddr) >= 2 and conn.laddr[1] == self.port 
    -                              for conn in process.net_connections(kind='inet')):
    -                            process.kill()
    -                    except Exception:
    -                        continue
    -                time.sleep(0.2)  # Reduced sleep time
    -        
    -        return not self._is_port_in_use()
    -
    -    def start(self, redirect_output: bool = False, force: bool = False):
    -        """Start the server if it's not already running."""
    -        if self.is_running():
    -            return
    -            
    -        # Clean up port
    -        if not self._cleanup_port() and not force:
    -            raise RuntimeError(f"Port {self.port} is in use and could not be freed")
    -
    -        # Set up logging
    -        stdout = stderr = None
    -        if redirect_output:
    -            log_dir = "logs"
    -            os.makedirs(log_dir, exist_ok=True)
    -            stdout = open(os.path.join(log_dir, f'{self.name}_server.log'), 'a')
    -            stderr = open(os.path.join(log_dir, f'{self.name}_server_error.log'), 'a')
    -
    -        # Start the server process
    -        try:
    -            workers_amount = os.getenv("UPSONIC_WORKERS_AMOUNT", 1)
    -            if self.app_path == "upsonic.tools_server.server.api:app":
    -                workers_amount = 1
    -            
    -            cmd = [
    -                sys.executable, "-m", "uvicorn",
    -                self.app_path,
    -                "--host", self.host,
    -                "--port", str(self.port),
    -                "--log-level", "error",
    -                "--no-access-log",
    -                "--workers", str(workers_amount)
    -            ]
    -            
    -            self._process = subprocess.Popen(
    -                cmd, stdout=stdout, stderr=stderr, start_new_session=True
    -            )
    -            self._manage_pid_file("write")
    -
    -            # Wait for server to start with optimized polling
    -            # Initial quick sleep to allow process to start
    -            time.sleep(0.1)
    -            
    -            # Progressive polling with increasing intervals
    -            poll_interval = 0.01
    -            max_poll_interval = 0.1
    -            start_time = time.time()
    -            while not self._is_port_in_use() and time.time() - start_time < 300:
    -                if self._process.poll() is not None:
    -                    raise RuntimeError(f"Server process terminated unexpectedly with code {self._process.returncode}")
    -                time.sleep(poll_interval)
    -                # Gradually increase polling interval
    -                poll_interval = min(poll_interval * 1.2, max_poll_interval)
    -
    -            if not self._is_port_in_use():
    -                raise RuntimeError(f"Timeout waiting for {self.name} server to start")
    -                
    -        except Exception as e:
    -            self.stop()
    -            traceback.print_exc()
    -            raise RuntimeError(f"Failed to start {self.name} server: {str(e)}")
    -
    -    def stop(self):
    -        """Stop the server if it's running."""
    -        # Try to stop process from PID file
    -        pid = self._manage_pid_file("read")
    -        if pid:
    -            try:
    -                process = psutil.Process(pid)
    -                try:
    -                    os.killpg(os.getpgid(process.pid), signal.SIGTERM)
    -                    process.wait(timeout=5)
    -                except Exception:
    -                    os.killpg(os.getpgid(process.pid), signal.SIGKILL)
    -            except Exception:
    -                pass
    -
    -        # Try to stop process from instance variable
    -        if self._process:
    -            try:
    -                self._process.terminate()
    -                self._process.wait(timeout=5)
    -            except Exception:
    -                if self._process.poll() is None:
    -                    self._process.kill()
    -
    -        self._process = None
    -        self._manage_pid_file("cleanup")
    -
    -    def is_running(self) -> bool:
    -        """Check if the server is currently running."""
    -        # Check if process from instance is running
    -        if self._process and self._process.poll() is None:
    -            return True
    -
    -        # Check if process from PID file is running
    -        pid = self._manage_pid_file("read")
    -        if pid:
    -            try:
    -                process = psutil.Process(pid)
    -                return process.is_running() and process.name().startswith("python")
    -            except psutil.NoSuchProcess:
    -                self._manage_pid_file("cleanup")
    -                
    -        return False 
    \ No newline at end of file
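`ServerManager.start()` above waits for the port to open by polling with a geometrically growing interval (0.01 s up to a 0.1 s cap). That backoff pattern can be isolated into a reusable helper; this is an illustrative sketch, and `wait_until` is a hypothetical name, not part of Upsonic.

```python
import time
from typing import Callable

def wait_until(ready: Callable[[], bool], timeout: float = 5.0,
               initial: float = 0.01, factor: float = 1.2,
               cap: float = 0.1) -> bool:
    """Poll `ready` with a growing interval; True once ready, False on timeout."""
    interval = initial
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if ready():
            return True
        time.sleep(interval)
        # Grow the interval geometrically, capped, as in the removed start().
        interval = min(interval * factor, cap)
    return ready()  # one final check at the deadline
```

The short initial interval keeps fast-starting servers responsive, while the cap bounds CPU churn during a slow start.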
    
  • src/upsonic/server/markdown/server/server.py · +0 −60 (file removed)
    @@ -1,60 +0,0 @@
    -from fastapi import HTTPException, UploadFile, File
    -from pydantic import BaseModel
    -from typing import List, Dict, Any, Optional, Union
    -import traceback
    -from ...api import app, timeout
    -
    -import asyncio
    -from concurrent.futures import ThreadPoolExecutor
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import base64
    -from ....storage.configuration import Configuration
    -import os
    -import tempfile
    -
    -
    -
    -prefix = "/markdown"
    -
    -
    -
    -@app.post(f"{prefix}/upload")
    -async def upload_file(file: UploadFile = File(...)):
    -    """
    -    Endpoint to upload a file and convert it to markdown.
    -
    -    Args:
    -        file: The file to convert to markdown
    -
    -    Returns:
    -        The markdown content
    -    """
    -    try:
    -        # Create a temporary directory if it doesn't exist
    -        temp_dir = os.path.join(tempfile.gettempdir(), "upsonic_uploads")
    -        os.makedirs(temp_dir, exist_ok=True)
    -
    -        # Save the uploaded file
    -        file_path = os.path.join(temp_dir, file.filename)
    -        with open(file_path, "wb") as f:
    -            content = await file.read()
    -            f.write(content)
    -
    -        # Convert to markdown
    -        from markitdown import MarkItDown
    -
    -        md = MarkItDown()
    -        markdown_content = md.convert(file_path).text_content
    -
    -        # Add filename as heading
    -        markdown_with_filename = f"# {file.filename}\n\n{markdown_content}"
    -
    -        # Clean up
    -        os.remove(file_path)
    -
    -        return {"markdown": markdown_with_filename}
    -    except Exception as e:
    -        print(traceback.format_exc())
    -        raise HTTPException(status_code=500, detail=str(e))
    -
    
  • src/upsonic/server/others/server/server.py · +0 −69 (file removed)
    @@ -1,69 +0,0 @@
    -from fastapi import HTTPException, UploadFile, File
    -from pydantic import BaseModel
    -from typing import List, Dict, Any, Optional, Union
    -import traceback
    -from ...api import app, timeout
    -
    -import asyncio
    -from concurrent.futures import ThreadPoolExecutor
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import base64
    -from ....storage.configuration import Configuration
    -import os
    -import tempfile
    -
    -
    -from fastapi.responses import FileResponse
    -import uuid
    -
    -
    -
    -
    -prefix = "/others"
    -
    -
    -
    -@app.get(f"{prefix}/take_screenshot")
    -async def take_screenshot():
    -    """
    -    Takes a screenshot using pyautogui and returns it to the client.
    -
    -    Returns:
    -        The screenshot image file
    -    """
    -    import pyautogui
    -    try:
    -        # Create a temporary directory if it doesn't exist
    -        temp_dir = os.path.join(tempfile.gettempdir(), "upsonic_screenshots")
    -        os.makedirs(temp_dir, exist_ok=True)
    -
    -        # Generate a unique filename
    -        filename = f"screenshot_{uuid.uuid4()}.png"
    -        file_path = os.path.join(temp_dir, filename)
    -
    -        # Take the screenshot
    -        screenshot = pyautogui.screenshot()
    -        screenshot.save(file_path)
    -
    -        # Return the file and clean up after sending
    -        return FileResponse(
    -            file_path,
    -            media_type="image/png",
    -            filename=filename,
    -            background=asyncio.create_task(cleanup_screenshot(file_path))
    -        )
    -    except Exception as e:
    -        print(traceback.format_exc())
    -        raise HTTPException(status_code=500, detail=str(e))
    -
    -async def cleanup_screenshot(file_path: str):
    -    """
    -    Cleanup function to remove the screenshot file after it's been sent.
    -    """
    -    try:
    -        await asyncio.sleep(1)  # Wait a bit to ensure the file has been sent
    -        if os.path.exists(file_path):
    -            os.remove(file_path)
    -    except Exception as e:
    -        print(f"Error cleaning up screenshot: {e}")
    
  • src/upsonic/server/storage/server/server.py · +0 −74 (file removed)
    @@ -1,74 +0,0 @@
    -from fastapi import HTTPException
    -from pydantic import BaseModel
    -from typing import List, Dict, Any, Optional, Union
    -import traceback
    -from ...api import app, timeout
    -
    -import asyncio
    -from concurrent.futures import ThreadPoolExecutor
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import base64
    -from ....storage.configuration import Configuration
    -
    -
    -prefix = "/storage"
    -
    -
    -
    -class ConfigGetRequest(BaseModel):
    -    key: str
    -
    -class ConfigSetRequest(BaseModel):
    -    key: str
    -    value: str
    -
    -class BulkConfigSetRequest(BaseModel):
    -    configs: Dict[str, str]
    -
    -
    -@app.post(f"{prefix}/config/get")
    -async def get_config(request: ConfigGetRequest):
    -    """
    -    Endpoint to get a configuration value by key using POST.
    -
    -    Args:
    -        key: The configuration key
    -
    -    Returns:
    -        The configuration value or a default message if not found
    -    """
    -    value = Configuration.get(request.key)
    -    return {"key": request.key, "value": value}
    -
    -
    -@app.post(f"{prefix}/config/set")
    -async def set_config(request: ConfigSetRequest):
    -    """
    -    Endpoint to set a configuration value.
    -
    -    Args:
    -        key: The configuration key
    -        value: The configuration value
    -
    -    Returns:
    -        A success message
    -    """
    -    Configuration.set(request.key, request.value)
    -    return {"message": "Configuration updated successfully"}
    -
    -@app.post(f"{prefix}/config/bulk_set")
    -async def bulk_set_config(request: BulkConfigSetRequest):
    -    """
    -    Endpoint to set multiple configuration values at once.
    -
    -    Args:
    -        configs: Dictionary of configuration key-value pairs
    -
    -    Returns:
    -        A success message
    -    """
    -    for key, value in request.configs.items():
    -        Configuration.set(key, value)
    -    return {"message": "Bulk configuration updated successfully"}
    -
    
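The removed storage endpoints are thin wrappers over `Configuration.get`/`Configuration.set`. A stdlib-only sketch of the `bulk_set` semantics, where the hypothetical `store` dict stands in for the SQLite-backed `Configuration`:

```python
from typing import Dict

store: Dict[str, str] = {}

def set_config(key: str, value: str) -> dict:
    store[key] = value
    return {"message": "Configuration updated successfully"}

def bulk_set_config(configs: Dict[str, str]) -> dict:
    # Mirrors the removed /storage/config/bulk_set handler: each pair is
    # written individually, so a failure midway is not rolled back.
    for key, value in configs.items():
        set_config(key, value)
    return {"message": "Bulk configuration updated successfully"}
```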
  • src/upsonic/server/tools/server.py (+0 −94, removed)
    @@ -1,94 +0,0 @@
    -from fastapi import HTTPException
    -from pydantic import BaseModel
    -from typing import List, Dict, Any, Optional, Union
    -import traceback
    -from ..api import app, timeout
    -from ...tools_server.tools_client import ToolManager
    -import asyncio
    -from concurrent.futures import ThreadPoolExecutor
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import base64
    -
    -
    -prefix = "/tools"
    -
    -
    -class InstallLibraryRequest(BaseModel):
    -    library: str
    -
    -
    -class CustomToolRequest(BaseModel):
    -    function: str
    -
    -
    -@app.post(f"{prefix}/install_library")
    -async def install_library(request: InstallLibraryRequest):
    -    """
    -    Endpoint to install a library.
    -
    -    Args:
    -        library: The library to install
    -
    -    Returns:
    -        A success message
    -    """
    -    with ToolManager() as tool_client:
    -        tool_client.install_library(request.library)
    -    return {"message": "Library installed successfully"}
    -
    -
    -
    -@app.post(f"{prefix}/uninstall_library")
    -async def uninstall_library(request: InstallLibraryRequest):
    -    """
    -    Endpoint to uninstall a library.
    -    """
    -    with ToolManager() as tool_client:
    -        tool_client.uninstall_library(request.library)
    -    return {"message": "Library uninstalled successfully"}
    -
    -
    -class AddToolRequest(BaseModel):
    -    function: Any
    -
    -@app.post(f"{prefix}/add_tool")
    -async def add_tool(request: AddToolRequest):
    -    """
    -    Endpoint to add a tool.
    -    """
    -    with ToolManager() as tool_client:
    -        tool_client.add_tool(request.function)
    -    return {"message": "Tool added successfully"}
    -
    -
    -class AddMCPToolRequest(BaseModel):
    -    name: str
    -    command: str
    -    args: List[str]
    -    env: Dict[str, str]
    -
    -class AddSSEMCPToolRequest(BaseModel):
    -    name: str
    -    url: str
    -
    -@app.post(f"{prefix}/add_mcp_tool")
    -async def add_mcp_tool(request: AddMCPToolRequest):
    -    """
    -    Endpoint to add a tool.
    -    """
    -    try:
    -        with ToolManager() as tool_client:
    -            tool_client.add_mcp_tool(request.name, request.command, request.args, request.env)
    -        return {"status_code": 200, "message": "Tool added successfully"}
    -    except Exception as e:
    -        return {"status_code": 500, "message": f"Error adding tool: This tool seems not okay to use."}
    -    
    -@app.post(f"{prefix}/add_sse_mcp")
    -async def add_sse_mcp(request: AddSSEMCPToolRequest):
    -    """
    -    Endpoint to add a tool.
    -    """
    -    with ToolManager() as tool_client:
    -        tool_client.add_sse_mcp(request.name, request.url)
    -    return {"status_code": 200, "message": "Tool added successfully"}
    \ No newline at end of file
    
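Note that the removed `add_mcp_tool` handler reports failures in the JSON body's `status_code` field while still returning an HTTP 200. A small sketch isolating that contract (the `add_fn` callable is a hypothetical stand-in for `ToolManager.add_mcp_tool`):

```python
def add_mcp_tool_result(add_fn, name, command, args, env) -> dict:
    # Mirrors the removed endpoint's contract: errors are reported via
    # the body's status_code field, not the HTTP status code.
    try:
        add_fn(name, command, args, env)
        return {"status_code": 200, "message": "Tool added successfully"}
    except Exception:
        return {"status_code": 500,
                "message": "Error adding tool: This tool seems not okay to use."}
```

Clients of such an endpoint must inspect the body, since the transport-level status is always 200.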
  • src/upsonic/storage/caching.py (+0 −73, removed)
    @@ -1,73 +0,0 @@
    -"""
    -Module for handling caching of data using SQLite.
    -"""
    -
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -import dill
    -import base64
    -import time
    -from typing import Optional, Any
    -from .configuration import ClientConfiguration
    -
    -
    -def save_to_cache_with_expiry(data: Any, cache_key: str, expiry_seconds: int) -> None:
    -    """
    -    Save data to cache with expiration time.
    -    
    -    Args:
    -        data: Any data to store in cache
    -        cache_key: Unique identifier for the cached data
    -        expiry_seconds: Number of seconds until the cache expires
    -    """
    -    the_module = dill.detect.getmodule(data)
    -    if the_module is not None:
    -        cloudpickle.register_pickle_by_value(the_module)
    -        
    -    current_time = int(time.time())
    -    expiry_time = current_time + expiry_seconds
    -    cache_key_full = f"cache_{cache_key}"
    -    
    -    cache_data = {
    -        'data': data,
    -        'expiry_time': expiry_time,
    -        'created_at': current_time
    -    }
    -    
    -    try:
    -        ClientConfiguration.delete(cache_key_full)
    -        serialized_data = base64.b64encode(cloudpickle.dumps(cache_data)).decode('utf-8')
    -        ClientConfiguration.set(cache_key_full, serialized_data)
    -    except Exception:
    -        ClientConfiguration.delete(cache_key_full)
    -        raise
    -
    -
    -def get_from_cache_with_expiry(cache_key: str) -> Optional[Any]:
    -    """
    -    Retrieve data from cache if not expired.
    -    
    -    Args:
    -        cache_key: Unique identifier for the cached data
    -        
    -    Returns:
    -        Cached data if found and not expired, None otherwise
    -    """
    -    cache_key_full = f"cache_{cache_key}"
    -    serialized_data = ClientConfiguration.get(cache_key_full)
    -
    -    if serialized_data is None:
    -        return None
    -    
    -    try:
    -        cache_data = cloudpickle.loads(base64.b64decode(serialized_data))
    -        current_time = int(time.time())
    -        
    -        if current_time > cache_data['expiry_time']:
    -            ClientConfiguration.delete(cache_key_full)
    -            return None
    -
    -        return cache_data['data']
    -    except Exception:
    -        ClientConfiguration.delete(cache_key_full)
    -        return None
    \ No newline at end of file
    
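The removed caching module serializes entries with cloudpickle and base64 into `ClientConfiguration`; the expiry logic itself is independent of that storage layer. A stdlib-only sketch of the same expiry scheme, using an in-memory dict in place of the pickled configuration store:

```python
import time
from typing import Any, Dict, Optional

_cache: Dict[str, dict] = {}

def save_with_expiry(key: str, data: Any, expiry_seconds: int) -> None:
    # Same shape as the removed helper: the payload is stored with its
    # creation and expiry timestamps under a namespaced key.
    now = int(time.time())
    _cache[f"cache_{key}"] = {
        "data": data,
        "expiry_time": now + expiry_seconds,
        "created_at": now,
    }

def get_with_expiry(key: str) -> Optional[Any]:
    # Expired or missing entries read as None; expired ones are evicted.
    entry = _cache.get(f"cache_{key}")
    if entry is None:
        return None
    if int(time.time()) > entry["expiry_time"]:
        del _cache[f"cache_{key}"]
        return None
    return entry["data"]
```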
  • src/upsonic/storage/configuration.py (+0 −144, removed)
    @@ -1,144 +0,0 @@
    -import os
    -import sqlite3
    -import json
    -from dotenv import load_dotenv
    -import signal
    -import sys
    -import threading
    -import logging
    -from contextlib import contextmanager
    -from .folder import BASE_PATH
    -
    -
    -class ConfigManager:
    -    def __init__(self, db_name="config.sqlite"):
    -        self.db_path = os.path.join(BASE_PATH, db_name)
    -        self._local = threading.local()
    -        self._setup_database()
    -        
    -        # Only set up signal handlers if we're in the main thread
    -        if threading.current_thread() is threading.main_thread():
    -            try:
    -                signal.signal(signal.SIGTERM, self._handle_signal)
    -                signal.signal(signal.SIGINT, self._handle_signal)
    -            except ValueError:
    -                # Ignore signal handling errors if we can't set them up
    -                pass
    -
    -    def _setup_database(self):
    -        with self._get_connection() as conn:
    -            conn.execute('''
    -                CREATE TABLE IF NOT EXISTS config_store (
    -                    key TEXT PRIMARY KEY,
    -                    value TEXT NOT NULL
    -                )
    -            ''')
    -            conn.commit()
    -
    -    @contextmanager
    -    def _get_connection(self):
    -        if not hasattr(self._local, 'conn'):
    -            self._local.conn = sqlite3.connect(self.db_path)
    -        try:
    -            yield self._local.conn
    -        except sqlite3.Error as e:
    -            logging.error(f"Database error: {e}")
    -            raise
    -        except Exception as e:
    -            logging.error(f"Unexpected error: {e}")
    -            raise
    -
    -    def _handle_signal(self, signum, frame):
    -        self.close_all_connections()
    -        try:
    -            sys.exit(0)
    -        except SystemExit:
    -            os._exit(0)
    -
    -    def close_all_connections(self):
    -        if hasattr(self._local, 'conn'):
    -            try:
    -                self._local.conn.commit()
    -                self._local.conn.close()
    -                del self._local.conn
    -            except Exception as e:
    -                logging.error(f"Error closing connection: {e}")
    -
    -    def initialize(self, key):
    -        load_dotenv()
    -        value = os.getenv(key)
    -        if value is not None:
    -            self.set(key, value)
    -
    -    def get(self, key, default=None):
    -        try:
    -            with self._get_connection() as conn:
    -                cursor = conn.cursor()
    -                cursor.execute('SELECT value FROM config_store WHERE key = ?', (key,))
    -                result = cursor.fetchone()
    -                return json.loads(result[0]) if result else default
    -        except (sqlite3.Error, json.JSONDecodeError) as e:
    -            logging.error(f"Error retrieving key {key}: {e}")
    -            return default
    -
    -    def delete(self, key):
    -        try:
    -            with self._get_connection() as conn:
    -                cursor = conn.cursor()
    -                cursor.execute('DELETE FROM config_store WHERE key = ?', (key,))
    -                conn.commit()
    -                return cursor.rowcount > 0
    -        except sqlite3.Error as e:
    -            logging.error(f"Error deleting key {key}: {e}")
    -            return False
    -
    -    def set(self, key, value):
    -        try:
    -            value_json = json.dumps(value)
    -            with self._get_connection() as conn:
    -                cursor = conn.cursor()
    -                cursor.execute('REPLACE INTO config_store (key, value) VALUES (?, ?)',
    -                          (key, value_json))
    -                conn.commit()
    -                return True
    -        except (sqlite3.Error, json.JSONEncodeError) as e:
    -            logging.error(f"Error setting key {key}: {e}")
    -            return False
    -
    -    def dump(self):
    -        try:
    -            with self._get_connection() as conn:
    -                conn.commit()
    -                return True
    -        except sqlite3.Error as e:
    -            logging.error(f"Error dumping database: {e}")
    -            return False
    -
    -    def __enter__(self):
    -        return self
    -
    -    def __exit__(self, exc_type, exc_val, exc_tb):
    -        self.close_all_connections()
    -
    -    def __del__(self):
    -        self.close_all_connections()
    -
    -
    -# Create a single instance of ConfigManager
    -Configuration = ConfigManager()
    -
    -Configuration.initialize("OPENAI_API_KEY")
    -Configuration.initialize("ANTHROPIC_API_KEY")
    -Configuration.initialize("AZURE_OPENAI_ENDPOINT")
    -Configuration.initialize("AZURE_OPENAI_API_VERSION")
    -Configuration.initialize("AZURE_OPENAI_API_KEY")
    -Configuration.initialize("AWS_ACCESS_KEY_ID")
    -Configuration.initialize("AWS_SECRET_ACCESS_KEY")
    -Configuration.initialize("AWS_REGION")
    -Configuration.initialize("DEEPSEEK_API_KEY")
    -Configuration.initialize("GOOGLE_GLA_API_KEY")
    -Configuration.initialize("OPENROUTER_API_KEY")
    -
    -Configuration.initialize("OLLAMA_BASE_URL")
    -
    -ClientConfiguration = ConfigManager(db_name="client_config.sqlite")
    \ No newline at end of file
    
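The removed `ConfigManager` is a thread-local SQLite key-value store with JSON-encoded values. (Its `set` also catches `json.JSONEncodeError`, which does not exist in the stdlib — `json.dumps` raises `TypeError` for unserializable values.) A minimal runnable sketch of the core pattern, omitting the signal handling and env bootstrapping:

```python
import json
import sqlite3
import threading

class MiniConfig:
    # Sketch of the removed ConfigManager: one sqlite table,
    # JSON-encoded values, one connection per thread.
    def __init__(self, db_path: str = ":memory:"):
        self.db_path = db_path
        self._local = threading.local()
        self._conn().execute(
            "CREATE TABLE IF NOT EXISTS config_store "
            "(key TEXT PRIMARY KEY, value TEXT NOT NULL)"
        )

    def _conn(self) -> sqlite3.Connection:
        if not hasattr(self._local, "conn"):
            self._local.conn = sqlite3.connect(self.db_path)
        return self._local.conn

    def set(self, key, value) -> None:
        self._conn().execute(
            "REPLACE INTO config_store (key, value) VALUES (?, ?)",
            (key, json.dumps(value)),
        )
        self._conn().commit()

    def get(self, key, default=None):
        row = self._conn().execute(
            "SELECT value FROM config_store WHERE key = ?", (key,)
        ).fetchone()
        return json.loads(row[0]) if row else default
```

JSON encoding lets non-string values (dicts, lists, numbers) round-trip through the TEXT column.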
  • src/upsonic/storage/folder.py (+0 −11, removed)
    @@ -1,11 +0,0 @@
    -import os
    -
    -from dotenv import load_dotenv
    -
    -load_dotenv()
    -
    -# Define a variable to store the current file's directory path
    -if os.getenv("USE_WORKDIR", "false").lower() == "true":
    -    BASE_PATH = os.path.dirname(os.getcwd())
    -else:
    -    BASE_PATH = os.path.dirname(os.path.abspath(__file__))
    
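The removed `folder.py` picks `BASE_PATH` from an environment flag. A small sketch of that selection, with the flag passed as a parameter instead of read from `os.getenv` (the helper name is illustrative, not Upsonic API):

```python
import os

def resolve_base_path(module_file: str, use_workdir: str = "false") -> str:
    # Mirrors the removed folder.py: USE_WORKDIR=true anchors storage at
    # the parent of the current working directory; otherwise it anchors
    # at the directory containing the given module file.
    if use_workdir.lower() == "true":
        return os.path.dirname(os.getcwd())
    return os.path.dirname(os.path.abspath(module_file))
```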
  • src/upsonic/tasks/task_response.py (+0 −0, renamed)
  • src/upsonic/tasks/tasks.py (+7 −3, renamed)
    @@ -6,15 +6,17 @@
     
     
     from .task_response import ObjectResponse
    -from ..printing import get_price_id_total_cost
    +from ..utils.printing import get_price_id_total_cost
    +from ..utils.error_wrapper import upsonic_error_handler
     
     from ..knowledge_base.knowledge_base import KnowledgeBase
     
     class Task(BaseModel):
         description: str
         images: Optional[List[str]] = None
         tools: list[Any] = []
    -    response_format: Union[Type[ObjectResponse], Type[BaseModel], None] = None
    +    response_format: Union[Type[ObjectResponse], Type[BaseModel], type[str], None] = str
    +    response_lang: str = "en"
         _response: Any = None
         context: Any = None
         price_id_: Optional[str] = None
    @@ -32,7 +34,7 @@ def __init__(
             description: str, 
             images: Optional[List[str]] = None,
             tools: list[Any] = None,
    -        response_format: Union[Type[ObjectResponse], Type[BaseModel], None] = None,
    +        response_format: Union[Type[ObjectResponse], Type[BaseModel], type[str], None] = str,
             response: Any = None,
             context: Any = None,
             price_id_: Optional[str] = None,
    @@ -73,6 +75,7 @@ def duration(self) -> Optional[float]:
                 return None
             return self.end_time - self.start_time
     
    +    @upsonic_error_handler(max_retries=2, show_error_details=True)
         def validate_tools(self):
             """
             Validates each tool in the tools list.
    @@ -92,6 +95,7 @@ def validate_tools(self):
     
     
         
    +    @upsonic_error_handler(max_retries=2, show_error_details=True)
         async def additional_description(self, client):
             if not self.context:
                 return ""
    
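The hunk above changes `Task.response_format`'s default from `None` to `str`, so an unformatted task now defaults to returning raw text. A minimal sketch of what that default implies for response handling (`coerce_response` is a hypothetical helper, not Upsonic's actual code path):

```python
from typing import Any, Optional, Type, Union

def coerce_response(raw: str,
                    response_format: Union[Type[Any], None] = str) -> Any:
    # With the new default, a task with no explicit format just returns
    # the raw string; a supplied type is used to parse/validate it.
    if response_format is None or response_format is str:
        return raw
    return response_format(raw)
```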
  • src/upsonic/tools.py (+0 −2043, removed)
    @@ -1,2043 +0,0 @@
    -"""
    -Upsonic Tools Module
    -This module contains the tool implementations that can be used with Upsonic.
    -"""
    -from typing import Any, List, Dict, Optional, Type, Union, Callable
    -import os
    -import json
    -import requests
    -import logging
    -import pathlib
    -from datetime import datetime
    -import time
    -import re
    -from .client.printing import missing_dependencies, missing_api_key
    -
    -class Search:
    -    pass
    -
    -
    -
    -class ComputerUse:
    -    pass
    -
    -class Screenshot:
    -    pass
    -
    -class BrowserUse:
    -    @staticmethod
    -    def analyze_dependencies() -> Dict[str, bool]:
    -        """
    -        Analyze the dependencies required for BrowserUse and return their status.
    -        
    -        Returns:
    -            Dictionary with dependency names as keys and their availability status as values
    -        """
    -        dependencies = {
    -            "browser_use": False
    -        }
    -        
    -        # Check each dependency
    -        try:
    -            import browser_use
    -            dependencies["browser_use"] = True
    -        except ImportError:
    -            pass
    -        
    -        return dependencies
    -        
    -    @staticmethod
    -    def __control__() -> bool:
    -        # Check the import browser_use
    -        try:
    -            import browser_use
    -            return True
    -        except ImportError:
    -            # Use the missing_dependencies function to display the error
    -            missing_dependencies("BrowserUse", ["browser_use"])
    -            raise ImportError("Missing dependency: browser_use. Please install it with: pip install browser-use")
    -
    -
    -
    -class Canvas:
    -    def __init__(self, canvas_name: str, llm_model: str = "openai/gpt-4o"):
    -        self.canvas_name = canvas_name
    -        self.llm_model = llm_model
    -
    -    def _save_canvas(self, canvas_text: str):
    -        """Save the canvas text to a file."""
    -        normalized_name = re.sub(r'[^\w\s-]', '', self.canvas_name).strip().replace(' ', '_')
    -        filename = f"{normalized_name}.txt"
    -        with open(filename, 'w', encoding='utf-8') as f:
    -            f.write(canvas_text)
    -
    -    def _load_canvas(self) -> str:
    -        """Load the canvas text from a file."""
    -        normalized_name = re.sub(r'[^\w\s-]', '', self.canvas_name).strip().replace(' ', '_')
    -        filename = f"{normalized_name}.txt"
    -        try:
    -            with open(filename, 'r', encoding='utf-8') as f:
    -                return f.read()
    -        except FileNotFoundError:
    -            return ""
    -
    -    def get_current_state_of_canvas(self) -> str:
    -        """Get the current state of the text canvas"""
    -        result = self._load_canvas()
    -        return "Empty Canvas" if result == "" else result
    -
    -    async def change_in_canvas(self, new_text_of_part: str, part_definition: str) -> str:
    -        """Change the text of a part of the canvas"""
    -        from upsonic import Task, Direct, UpsonicClient
    -        
    -        client = UpsonicClient("localserver", debug=True, main_port=7542, tools_port=8088)
    -        direct = Direct(model=self.llm_model, client=client)
    -        
    -        current_canvas = self.get_current_state_of_canvas()
    -        
    -        # For empty canvas, just save the new content directly
    -        if current_canvas == "Empty Canvas" or current_canvas == "":
    -            print("******** SAVING CANVAS *********")
    -            print(new_text_of_part)
    -            self._save_canvas(new_text_of_part)
    -            return new_text_of_part
    -
    -        # For existing canvas, use LLM to modify or append content
    -        prompt = (
    -            f"I have a text document with the following content:\n\n{current_canvas}\n\n"
    -            f"If there is a line or section that contains '{part_definition}', replace it with exactly:\n"
    -            f"{new_text_of_part}\n\n"
    -            f"If the document does NOT contain a section with '{part_definition}', append the following as a new section at the end of the document:\n"
    -            f"{new_text_of_part}\n\n"
    -            f"Return only the complete updated text document without any explanations, code blocks, or additional formatting."
    -        )
    -        
    -        task = Task(prompt)
    -        result = await direct.do_async(task)
    -        print("******** SAVING CANVAS *********")
    -        print(result)
    -        self._save_canvas(result)
    -        return result
    -
    -
    -
    -
    -
    -class Wikipedia:
    -    @staticmethod
    -    def analyze_dependencies() -> Dict[str, bool]:
    -        """
    -        Analyze the dependencies required for Wikipedia and return their status.
    -        
    -        Returns:
    -            Dictionary with dependency names as keys and their availability status as values
    -        """
    -        dependencies = {
    -            "wikipedia": False
    -        }
    -        
    -        # Check each dependency
    -        try:
    -            import wikipedia
    -            dependencies["wikipedia"] = True
    -        except ImportError:
    -            pass
    -        
    -        return dependencies
    -        
    -    @staticmethod
    -    def __control__() -> bool:
    -        # Check the import wikipedia
    -        try:
    -            import wikipedia
    -            return True
    -        except ImportError:
    -            # Use the missing_dependencies function to display the error
    -            missing_dependencies("Wikipedia", ["wikipedia"])
    -            raise ImportError("Missing dependency: wikipedia. Please install it with: pip install wikipedia")
    -        
    -    def search(query: str) -> str:
    -        import wikipedia
    -        return wikipedia.search(query)
    -    
    -    def summary(query: str) -> str:
    -        import wikipedia
    -        return wikipedia.summary(query)
    -
    -
    -class DuckDuckGo:
    -    @staticmethod
    -    def analyze_dependencies() -> Dict[str, bool]:
    -        """
    -        Analyze the dependencies required for DuckDuckGo and return their status.
    -        
    -        Returns:
    -            Dictionary with dependency names as keys and their availability status as values
    -        """
    -        dependencies = {
    -            "duckduckgo_search": False
    -        }
    -        
    -        # Check each dependency
    -        try:
    -            import duckduckgo_search
    -            dependencies["duckduckgo_search"] = True
    -        except ImportError:
    -            pass
    -        
    -        return dependencies
    -        
    -    @staticmethod
    -    def __control__() -> bool:
    -        # Check the import duckduckgo_search
    -        try:
    -            import duckduckgo_search
    -            return True
    -        except ImportError:
    -            # Use the missing_dependencies function to display the error
    -            missing_dependencies("DuckDuckGo", ["duckduckgo_search"])
    -            raise ImportError("Missing dependency: duckduckgo_search. Please install it with: pip install duckduckgo-search")
    -    
    -    def search(query: str, max_results: int = 10) -> List[Dict[str, str]]:
    -        """
    -        Search DuckDuckGo for the given query and return text results.
    -        
    -        Args:
    -            query: The search query
    -            max_results: Maximum number of results to return (default: 10)
    -            
    -        Returns:
    -            List of dictionaries containing search results with keys: title, href, body
    -        """
    -        from duckduckgo_search import DDGS
    -        
    -        ddgs = DDGS()
    -        results = list(ddgs.text(query, max_results=max_results))
    -        return results
    -    
    -
    -class SerperDev:
    -    @staticmethod
    -    def _load_api_key_from_env_file() -> Optional[str]:
    -        """
    -        Try to load the SERPER_API_KEY from a .env file using python-dotenv.
    -        
    -        Returns:
    -            The API key if found in .env file, None otherwise
    -        """
    -        try:
    -            # Try to import dotenv
    -            from dotenv import load_dotenv
    -        except ImportError:
    -            raise ImportError("python-dotenv is not installed. Please install it with 'pip install python-dotenv'")
    -        
    -        # Check for .env file in current directory and parent directories
    -        current_dir = pathlib.Path.cwd()
    -        
    -        # Look in current directory and up to 3 parent directories
    -        for _ in range(4):
    -            env_path = current_dir / '.env'
    -            if env_path.exists():
    -                # Load the .env file
    -                load_dotenv(dotenv_path=env_path)
    -                
    -                # Check if SERPER_API_KEY is now in environment
    -                if "SERPER_API_KEY" in os.environ:
    -                    return os.environ["SERPER_API_KEY"]
    -            
    -            # Move to parent directory
    -            parent_dir = current_dir.parent
    -            if parent_dir == current_dir:  # Reached root directory
    -                break
    -            current_dir = parent_dir
    -        
    -        return None
    -    
    -    def __control__(self) -> bool:
    -        # Check if requests is installed
    -        try:
    -            import requests
    -        except ImportError:
    -            raise ImportError("requests is not installed. Please install it with 'pip install requests'")
    -        
    -        # Check if SERPER_API_KEY is set in environment variables
    -        if "SERPER_API_KEY" not in os.environ:
    -            try:
    -                # Try to load API key from .env file
    -                api_key = SerperDev._load_api_key_from_env_file()
    -                if api_key is None:
    -                    # API key not found in .env file
    -                    missing_api_key("SerperDev", "SERPER_API_KEY")
    -                    raise EnvironmentError("SERPER_API_KEY environment variable is not set and could not be found in .env file")
    -            except ImportError:
    -                # If dotenv is not installed, we can't load from .env file
    -                missing_api_key("SerperDev", "SERPER_API_KEY", dotenv_support=False)
    -                raise EnvironmentError("SERPER_API_KEY environment variable is not set and python-dotenv is not installed")
    -        
    -        return True
    -    
    -    def __init__(self, base_url: str = "https://google.serper.dev", search_type: str = "search", n_results: int = 10, country: str = "us", 
    -                 location: str = None, locale: str = "en", api_key: Optional[str] = None):
    -        """
    -        Initialize the SerperDev search tool.
    -        
    -        Args:
    -            base_url: Base URL for the Serper API (default: "https://google.serper.dev")
    -            search_type: Type of search to perform (default: "search")
    -            n_results: Number of results to return (default: 10)
    -            country: Country code for search (default: "us")
    -            location: Location for search (default: None)
    -            locale: Locale for search (default: "en")
    -            api_key: Serper API key (optional, will try to load from environment if not provided)
    -        """
    -        self.base_url = base_url
    -        self.search_type = search_type
    -        self.n_results = n_results
    -        self.country = country
    -        self.location = location
    -        self.locale = locale
    -        
    -        # Set API key
    -        self.api_key = api_key
    -        
    -        # If API key not provided, try to load from environment or .env file
    -        if self.api_key is None:
    -            # First check environment variables
    -            if "SERPER_API_KEY" in os.environ:
    -                self.api_key = os.environ["SERPER_API_KEY"]
    -            else:
    -                # Try to load from .env file
    -                try:
    -                    api_key = self._load_api_key_from_env_file()
    -                    if api_key:
    -                        self.api_key = api_key
    -                    else:
    -                        # Print missing API key message
    -                        missing_api_key("SerperDev", "SERPER_API_KEY")
    -                        raise EnvironmentError("SERPER_API_KEY environment variable is not set and could not be found in .env file")
    -                except ImportError:
    -                    # If dotenv is not installed and no API key in environment
    -                    if "SERPER_API_KEY" not in os.environ:
    -                        # Print missing API key message without dotenv support
    -                        missing_api_key("SerperDev", "SERPER_API_KEY", dotenv_support=False)
    -                        raise EnvironmentError("SERPER_API_KEY environment variable is not set and python-dotenv is not installed")
    -                    self.api_key = os.environ["SERPER_API_KEY"]
    -
    -    def _get_search_url(self) -> str:
    -        """Get the appropriate endpoint URL based on search type."""
    -        search_type = self.search_type.lower()
    -        allowed_search_types = ["search", "news"]
    -        if search_type not in allowed_search_types:
    -            raise ValueError(
    -                f"Invalid search type: {search_type}. Must be one of: {', '.join(allowed_search_types)}"
    -            )
    -        return f"{self.base_url}/{search_type}"
    -
    -    def _process_knowledge_graph(self, kg: dict) -> dict:
    -        """Process knowledge graph data from search results."""
    -        return {
    -            "title": kg.get("title", ""),
    -            "type": kg.get("type", ""),
    -            "website": kg.get("website", ""),
    -            "imageUrl": kg.get("imageUrl", ""),
    -            "description": kg.get("description", ""),
    -            "descriptionSource": kg.get("descriptionSource", ""),
    -            "descriptionLink": kg.get("descriptionLink", ""),
    -            "attributes": kg.get("attributes", {}),
    -        }
    -
    -    def _process_organic_results(self, organic_results: list) -> list:
    -        """Process organic search results."""
    -        processed_results = []
    -        for result in organic_results[:self.n_results]:
    -            try:
    -                result_data = {
    -                    "title": result["title"],
    -                    "link": result["link"],
    -                    "snippet": result.get("snippet", ""),
    -                    "position": result.get("position"),
    -                }
    -
    -                if "sitelinks" in result:
    -                    result_data["sitelinks"] = [
    -                        {
    -                            "title": sitelink.get("title", ""),
    -                            "link": sitelink.get("link", ""),
    -                        }
    -                        for sitelink in result["sitelinks"]
    -                    ]
    -
    -                processed_results.append(result_data)
    -            except KeyError:
    -                continue
    -        return processed_results
    -
    -    def _process_people_also_ask(self, paa_results: list) -> list:
    -        """Process 'People Also Ask' results."""
    -        processed_results = []
    -        for result in paa_results[:self.n_results]:
    -            try:
    -                result_data = {
    -                    "question": result["question"],
    -                    "snippet": result.get("snippet", ""),
    -                    "title": result.get("title", ""),
    -                    "link": result.get("link", ""),
    -                }
    -                processed_results.append(result_data)
    -            except KeyError:
    -                continue
    -        return processed_results
    -
    -    def _process_related_searches(self, related_results: list) -> list:
    -        """Process related search results."""
    -        processed_results = []
    -        for result in related_results[:self.n_results]:
    -            try:
    -                processed_results.append({"query": result["query"]})
    -            except KeyError:
    -                continue
    -        return processed_results
    -
    -    def _process_news_results(self, news_results: list) -> list:
    -        """Process news search results."""
    -        processed_results = []
    -        for result in news_results[:self.n_results]:
    -            try:
    -                result_data = {
    -                    "title": result["title"],
    -                    "link": result["link"],
    -                    "snippet": result.get("snippet", ""),
    -                    "date": result.get("date", ""),
    -                    "source": result.get("source", ""),
    -                    "imageUrl": result.get("imageUrl", ""),
    -                }
    -                processed_results.append(result_data)
    -            except KeyError:
    -                continue
    -        return processed_results
    -
    -    def _process_search_results(self, results: dict) -> dict:
    -        """Process search results based on search type."""
    -        formatted_results = {}
    -
    -        if self.search_type == "search":
    -            if "knowledgeGraph" in results:
    -                formatted_results["knowledgeGraph"] = self._process_knowledge_graph(
    -                    results["knowledgeGraph"]
    -                )
    -
    -            if "organic" in results:
    -                formatted_results["organic"] = self._process_organic_results(
    -                    results["organic"]
    -                )
    -
    -            if "peopleAlsoAsk" in results:
    -                formatted_results["peopleAlsoAsk"] = self._process_people_also_ask(
    -                    results["peopleAlsoAsk"]
    -                )
    -
    -            if "relatedSearches" in results:
    -                formatted_results["relatedSearches"] = self._process_related_searches(
    -                    results["relatedSearches"]
    -                )
    -
    -        elif self.search_type == "news":
    -            if "news" in results:
    -                formatted_results["news"] = self._process_news_results(results["news"])
    -
    -        return formatted_results
    -
    -    def search(self, query: str) -> Dict[str, Any]:
    -        """
    -        Search the web using Serper API.
    -
    -        Args:
    -            query: The search query
    -
    -        Returns:
    -            Dictionary containing processed search results
    -        """
    -        search_url = self._get_search_url()
    -
    -        # Build the request body once; optional fields are added only when set
    -        payload = {"q": query, "num": self.n_results}
    -        if self.country:
    -            payload["gl"] = self.country
    -        if self.location:
    -            payload["location"] = self.location
    -        if self.locale:
    -            payload["hl"] = self.locale
    -
    -        headers = {
    -            "X-API-KEY": self.api_key,
    -            "Content-Type": "application/json",
    -        }
    -
    -        try:
    -            response = requests.post(
    -                search_url, headers=headers, json=payload, timeout=10
    -            )
    -            response.raise_for_status()
    -            results = response.json()
    -            
    -            if not results:
    -                raise ValueError("Empty response from Serper API")
    -                
    -            formatted_results = {
    -                "searchParameters": {
    -                    "q": query,
    -                    "type": self.search_type,
    -                    **results.get("searchParameters", {}),
    -                }
    -            }
    -
    -            formatted_results.update(self._process_search_results(results))
    -            formatted_results["credits"] = results.get("credits", 1)
    -            
    -            return formatted_results
    -            
    -        except requests.exceptions.RequestException as e:
    -            raise RuntimeError(f"Error making request to Serper API: {e}") from e
    -        except json.JSONDecodeError as e:
    -            raise RuntimeError(f"Error decoding JSON response: {e}") from e
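The removed `search` method originally assembled its request body through repeated `json.dumps`/`json.loads` round-trips. A minimal sketch of the same payload construction as a single dict (the function name and signature are illustrative, not part of Upsonic):

```python
def build_serper_payload(query, n_results, country=None, location=None, locale=None):
    """Build the Serper request body once as a plain dict.

    Optional fields ("gl", "location", "hl") are added only when set,
    matching the behaviour of the original dumps/loads chain.
    """
    payload = {"q": query, "num": n_results}
    if country:
        payload["gl"] = country
    if location:
        payload["location"] = location
    if locale:
        payload["hl"] = locale
    return payload
```

Passing the dict directly via `requests.post(..., json=payload)` lets `requests` handle serialization, so no intermediate JSON string is needed.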
    -
    -
    -class FirecrawlSearchTool:
    -    @staticmethod
    -    def analyze_dependencies() -> Dict[str, bool]:
    -        """
    -        Analyze the dependencies required for FirecrawlSearchTool and return their status.
    -        
    -        Returns:
    -            Dictionary with dependency names as keys and their availability status as values
    -        """
    -        dependencies = {
    -            "requests": False,
    -            "firecrawl": False,
    -            "python-dotenv": False
    -        }
    -        
    -        # Check each dependency
    -        try:
    -            import requests
    -            dependencies["requests"] = True
    -        except ImportError:
    -            pass
    -        
    -        try:
    -            import firecrawl
    -            dependencies["firecrawl"] = True
    -        except ImportError:
    -            pass
    -        
    -        try:
    -            from dotenv import load_dotenv
    -            dependencies["python-dotenv"] = True
    -        except ImportError:
    -            pass
    -        
    -        return dependencies
    -    
    -    @staticmethod
    -    def _load_api_key_from_env_file() -> Optional[str]:
    -        """
    -        Try to load the FIRECRAWL_API_KEY from a .env file using python-dotenv.
    -        
    -        Returns:
    -            The API key if found in .env file, None otherwise
    -        """
    -        try:
    -            # Try to import dotenv
    -            from dotenv import load_dotenv
    -        except ImportError:
    -            raise ImportError("python-dotenv is not installed. Please install it with 'pip install python-dotenv'")
    -        
    -        # Check for .env file in current directory and parent directories
    -        current_dir = pathlib.Path.cwd()
    -        
    -        # Look in current directory and up to 3 parent directories
    -        for _ in range(4):
    -            env_path = current_dir / '.env'
    -            if env_path.exists():
    -                # Load the .env file
    -                load_dotenv(dotenv_path=env_path)
    -                
    -                # Check if FIRECRAWL_API_KEY is now in environment
    -                if "FIRECRAWL_API_KEY" in os.environ:
    -                    return os.environ["FIRECRAWL_API_KEY"]
    -            
    -            # Move to parent directory
    -            parent_dir = current_dir.parent
    -            if parent_dir == current_dir:  # Reached root directory
    -                break
    -            current_dir = parent_dir
    -        
    -        return None
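The `.env` lookup above probes the working directory and up to three parent directories, stopping at the filesystem root. A pure sketch of the candidate paths it would check (the helper name is hypothetical), which makes the ascent testable without touching the filesystem:

```python
from pathlib import PurePosixPath

def candidate_env_paths(start, levels=3):
    """Return the .env paths the lookup would check, in order:
    `start` itself plus up to `levels` parent directories,
    stopping early when the filesystem root is reached."""
    paths = []
    current = start
    for _ in range(levels + 1):
        paths.append(current / ".env")
        if current.parent == current:  # reached the root
            break
        current = current.parent
    return paths
```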
    -    
    -    def __control__(self) -> bool:
    -        """
    -        Check if the required dependencies are installed and API key is available.
    -        
    -        Returns:
    -            True if all requirements are met
    -        
    -        Raises:
    -            ImportError: If required packages are not installed
    -            EnvironmentError: If API key is not available
    -        """
    -        # Analyze dependencies
    -        dependencies = self.analyze_dependencies()
    -        missing = [dep for dep, installed in dependencies.items() if not installed]
    -        
    -        # Print missing dependencies
    -        if missing:
    -            # Use the new printing function
    -            missing_dependencies("FirecrawlSearchTool", missing)
    -            
    -            # Raise ImportError with combined message for all missing dependencies
    -            install_cmd = "pip install " + " ".join(missing)
    -            raise ImportError(f"Missing dependencies: {', '.join(missing)}. Please install them with: {install_cmd}")
    -        
    -        # Check if FIRECRAWL_API_KEY is set in environment variables
    -        if "FIRECRAWL_API_KEY" not in os.environ:
    -            try:
    -                # Try to load API key from .env file
    -                api_key = FirecrawlSearchTool._load_api_key_from_env_file()
    -                if api_key is None:
    -                    # Print missing API key message
    -                    missing_api_key("FirecrawlSearchTool", "FIRECRAWL_API_KEY")
    -                    raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and could not be found in .env file")
    -            except ImportError:
    -                # If dotenv is not installed, we can't load from .env file
    -                # Print missing API key message without dotenv support
    -                missing_api_key("FirecrawlSearchTool", "FIRECRAWL_API_KEY", dotenv_support=False)
    -                raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and python-dotenv is not installed")
    -        
    -        return True
    -    
    -    def __init__(self, api_key: Optional[str] = None):
    -        """
    -        Initialize the FirecrawlSearchTool.
    -        
    -        Args:
    -            api_key: Firecrawl API key (optional, will try to load from environment if not provided)
    -        """
    -        # Set API key
    -        self.api_key = api_key
    -        
    -        # If API key not provided, try to load from environment or .env file
    -        if self.api_key is None:
    -            # First check environment variables
    -            if "FIRECRAWL_API_KEY" in os.environ:
    -                self.api_key = os.environ["FIRECRAWL_API_KEY"]
    -            else:
    -                # Try to load from .env file
    -                try:
    -                    api_key = self._load_api_key_from_env_file()
    -                    if api_key:
    -                        self.api_key = api_key
    -                    else:
    -                        # Print missing API key message
    -                        missing_api_key("FirecrawlSearchTool", "FIRECRAWL_API_KEY")
    -                        raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and could not be found in .env file")
    -                except ImportError:
    -                    # If dotenv is not installed and no API key in environment
    -                    if "FIRECRAWL_API_KEY" not in os.environ:
    -                        # Print missing API key message without dotenv support
    -                        missing_api_key("FirecrawlSearchTool", "FIRECRAWL_API_KEY", dotenv_support=False)
    -                        raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and python-dotenv is not installed")
    -                    self.api_key = os.environ["FIRECRAWL_API_KEY"]
    -        
    -        # Verify firecrawl is importable; the client itself is created per call
    -        try:
    -            from firecrawl import FirecrawlApp
    -        except ImportError:
    -            missing_dependencies("FirecrawlSearchTool", ["firecrawl-py"])
    -            raise ImportError("firecrawl-py is not installed. Please install it with 'pip install firecrawl-py'")
    -    
    -    def search(self, query: str, limit: int = 5, tbs: Optional[str] = None, 
    -               lang: str = "en", country: str = "us", location: Optional[str] = None,
    -               timeout: int = 60000, scrape_options: Optional[Dict[str, Any]] = None) -> Any:
    -        """
    -        Search the web using Firecrawl API.
    -        
    -        Args:
    -            query: The search query
    -            limit: Maximum number of results to return (default: 5)
    -            tbs: Time-based search parameter
    -            lang: Language code for search results (default: 'en')
    -            country: Country code for search results (default: 'us')
    -            location: Location parameter for search results
    -            timeout: Timeout in milliseconds (default: 60000)
    -            scrape_options: Options for scraping search results
    -            
    -        Returns:
    -            Search results from Firecrawl
    -        """
    -        options = {
    -            "limit": limit,
    -            "tbs": tbs,
    -            "lang": lang,
    -            "country": country,
    -            "location": location,
    -            "timeout": timeout,
    -            "scrapeOptions": scrape_options or {},
    -        }
    -        from firecrawl import FirecrawlApp
    -        _firecrawl = FirecrawlApp(api_key=self.api_key)
    -
    -        return _firecrawl.search(query=query, params=options)
    -
    -
    -class FirecrawlScrapeWebsiteTool:
    -    @staticmethod
    -    def analyze_dependencies() -> Dict[str, bool]:
    -        """
    -        Analyze the dependencies required for FirecrawlScrapeWebsiteTool and return their status.
    -        
    -        Returns:
    -            Dictionary with dependency names as keys and their availability status as values
    -        """
    -        dependencies = {
    -            "requests": False,
    -            "firecrawl": False,
    -            "python-dotenv": False
    -        }
    -        
    -        # Check each dependency
    -        try:
    -            import requests
    -            dependencies["requests"] = True
    -        except ImportError:
    -            pass
    -        
    -        try:
    -            import firecrawl
    -            dependencies["firecrawl"] = True
    -        except ImportError:
    -            pass
    -        
    -        try:
    -            from dotenv import load_dotenv
    -            dependencies["python-dotenv"] = True
    -        except ImportError:
    -            pass
    -        
    -        return dependencies
    -    
    -    def __control__(self) -> bool:
    -        """
    -        Check if the required dependencies are installed and API key is available.
    -        
    -        Returns:
    -            True if all requirements are met
    -        
    -        Raises:
    -            ImportError: If required packages are not installed
    -            EnvironmentError: If API key is not available
    -        """
    -        # Analyze dependencies
    -        dependencies = self.analyze_dependencies()
    -        missing = [dep for dep, installed in dependencies.items() if not installed]
    -        
    -        # Print missing dependencies
    -        if missing:
    -            # Use the new printing function
    -            missing_dependencies("FirecrawlScrapeWebsiteTool", missing)
    -            
    -            # Raise ImportError with combined message for all missing dependencies
    -            install_cmd = "pip install " + " ".join(missing)
    -            raise ImportError(f"Missing dependencies: {', '.join(missing)}. Please install them with: {install_cmd}")
    -        
    -        # Check if FIRECRAWL_API_KEY is set in environment variables
    -        if "FIRECRAWL_API_KEY" not in os.environ:
    -            try:
    -                # Try to load API key from .env file
    -                api_key = FirecrawlScrapeWebsiteTool._load_api_key_from_env_file()
    -                if api_key is None:
    -                    # API key not found in .env file
    -                    missing_api_key("FirecrawlScrapeWebsiteTool", "FIRECRAWL_API_KEY")
    -                    raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and could not be found in .env file")
    -            except ImportError:
    -                # If dotenv is not installed, we can't load from .env file
    -                missing_api_key("FirecrawlScrapeWebsiteTool", "FIRECRAWL_API_KEY", dotenv_support=False)
    -                raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and python-dotenv is not installed")
    -        
    -        return True
    -    
    -    @staticmethod
    -    def _load_api_key_from_env_file() -> Optional[str]:
    -        """
    -        Try to load the FIRECRAWL_API_KEY from a .env file using python-dotenv.
    -        
    -        Returns:
    -            The API key if found in .env file, None otherwise
    -        """
    -        try:
    -            # Try to import dotenv
    -            from dotenv import load_dotenv
    -        except ImportError:
    -            raise ImportError("python-dotenv is not installed. Please install it with 'pip install python-dotenv'")
    -        
    -        # Check for .env file in current directory and parent directories
    -        current_dir = pathlib.Path.cwd()
    -        
    -        # Look in current directory and up to 3 parent directories
    -        for _ in range(4):
    -            env_path = current_dir / '.env'
    -            if env_path.exists():
    -                # Load the .env file
    -                load_dotenv(dotenv_path=env_path)
    -                
    -                # Check if FIRECRAWL_API_KEY is now in environment
    -                if "FIRECRAWL_API_KEY" in os.environ:
    -                    return os.environ["FIRECRAWL_API_KEY"]
    -            
    -            # Move to parent directory
    -            parent_dir = current_dir.parent
    -            if parent_dir == current_dir:  # Reached root directory
    -                break
    -            current_dir = parent_dir
    -        
    -        return None
    -    
    -    def __init__(self, api_key: Optional[str] = None):
    -        """
    -        Initialize the FirecrawlScrapeWebsiteTool.
    -        
    -        Args:
    -            api_key: Firecrawl API key (optional, will try to load from environment if not provided)
    -        """
    -        # Set API key
    -        self.api_key = api_key
    -        
    -        # If API key not provided, try to load from environment or .env file
    -        if self.api_key is None:
    -            # First check environment variables
    -            if "FIRECRAWL_API_KEY" in os.environ:
    -                self.api_key = os.environ["FIRECRAWL_API_KEY"]
    -            else:
    -                # Try to load from .env file
    -                try:
    -                    api_key = self._load_api_key_from_env_file()
    -                    if api_key:
    -                        self.api_key = api_key
    -                    else:
    -                        # Print missing API key message
    -                        missing_api_key("FirecrawlScrapeWebsiteTool", "FIRECRAWL_API_KEY")
    -                        raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and could not be found in .env file")
    -                except ImportError:
    -                    # If dotenv is not installed and no API key in environment
    -                    if "FIRECRAWL_API_KEY" not in os.environ:
    -                        # Print missing API key message without dotenv support
    -                        missing_api_key("FirecrawlScrapeWebsiteTool", "FIRECRAWL_API_KEY", dotenv_support=False)
    -                        raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and python-dotenv is not installed")
    -                    self.api_key = os.environ["FIRECRAWL_API_KEY"]
    -        
    -        # Verify firecrawl is importable; the client itself is created per call
    -        try:
    -            from firecrawl import FirecrawlApp
    -        except ImportError:
    -            raise ImportError("firecrawl-py is not installed. Please install it with 'pip install firecrawl-py'")
    -    
    -    def scrape_website(self, url: str, timeout: int = 30000, only_main_content: bool = True,
    -                      formats: Optional[List[str]] = None, include_tags: Optional[List[str]] = None,
    -                      exclude_tags: Optional[List[str]] = None, headers: Optional[Dict[str, str]] = None,
    -                      wait_for: int = 0) -> Any:
    -        """
    -        Scrape a website using Firecrawl API.
    -        
    -        Args:
    -            url: Website URL to scrape
    -            timeout: Timeout in milliseconds (default: 30000)
    -            only_main_content: Whether to extract only the main content (default: True)
    -            formats: Output formats (default: ["markdown"])
    -            include_tags: HTML tags to include in the extraction
    -            exclude_tags: HTML tags to exclude from the extraction
    -            headers: Custom HTTP headers to use for the request
    -            wait_for: Time to wait for JavaScript execution in milliseconds
    -            
    -        Returns:
    -            Scraped content from the website
    -        """
    -        # Set default values
    -        if formats is None:
    -            formats = ["markdown"]
    -        if include_tags is None:
    -            include_tags = []
    -        if exclude_tags is None:
    -            exclude_tags = []
    -        if headers is None:
    -            headers = {}
    -        
    -        # Prepare scrape options
    -        options = {
    -            "formats": formats,
    -            "onlyMainContent": only_main_content,
    -            "includeTags": include_tags,
    -            "excludeTags": exclude_tags,
    -            "headers": headers,
    -            "waitFor": wait_for,
    -            "timeout": timeout,
    -        }
    -        
    -        # Initialize FirecrawlApp and scrape the URL
    -        from firecrawl import FirecrawlApp
    -        _firecrawl = FirecrawlApp(api_key=self.api_key)
    -        
    -        return _firecrawl.scrape_url(url, **options)
    -
    -
    -class FirecrawlCrawlWebsiteTool:
    -    @staticmethod
    -    def analyze_dependencies() -> Dict[str, bool]:
    -        """
    -        Analyze the dependencies required for FirecrawlCrawlWebsiteTool and return their status.
    -        
    -        Returns:
    -            Dictionary with dependency names as keys and their availability status as values
    -        """
    -        dependencies = {
    -            "requests": False,
    -            "firecrawl": False,
    -            "python-dotenv": False
    -        }
    -        
    -        # Check each dependency
    -        try:
    -            import requests
    -            dependencies["requests"] = True
    -        except ImportError:
    -            pass
    -        
    -        try:
    -            import firecrawl
    -            dependencies["firecrawl"] = True
    -        except ImportError:
    -            pass
    -        
    -        try:
    -            from dotenv import load_dotenv
    -            dependencies["python-dotenv"] = True
    -        except ImportError:
    -            pass
    -        
    -        return dependencies
    -    
    -    def __control__(self) -> bool:
    -        """
    -        Check if the required dependencies are installed and API key is available.
    -        
    -        Returns:
    -            True if all requirements are met
    -        
    -        Raises:
    -            ImportError: If required packages are not installed
    -            EnvironmentError: If API key is not available
    -        """
    -        # Analyze dependencies
    -        dependencies = self.analyze_dependencies()
    -        missing = [dep for dep, installed in dependencies.items() if not installed]
    -        
    -        # Print missing dependencies
    -        if missing:
    -            # Use the new printing function
    -            missing_dependencies("FirecrawlCrawlWebsiteTool", missing)
    -            
    -            # Raise ImportError with combined message for all missing dependencies
    -            install_cmd = "pip install " + " ".join(missing)
    -            raise ImportError(f"Missing dependencies: {', '.join(missing)}. Please install them with: {install_cmd}")
    -        
    -        # Check if FIRECRAWL_API_KEY is set in environment variables
    -        if "FIRECRAWL_API_KEY" not in os.environ:
    -            try:
    -                # Try to load API key from .env file
    -                api_key = FirecrawlCrawlWebsiteTool._load_api_key_from_env_file()
    -                if api_key is None:
    -                    # API key not found in .env file
    -                    missing_api_key("FirecrawlCrawlWebsiteTool", "FIRECRAWL_API_KEY")
    -                    raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and could not be found in .env file")
    -            except ImportError:
    -                # If dotenv is not installed, we can't load from .env file
    -                missing_api_key("FirecrawlCrawlWebsiteTool", "FIRECRAWL_API_KEY", dotenv_support=False)
    -                raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and python-dotenv is not installed")
    -        
    -        return True
    -    
    -    def __init__(self, api_key: Optional[str] = None):
    -        """
    -        Initialize the FirecrawlCrawlWebsiteTool.
    -        
    -        Args:
    -            api_key: Firecrawl API key (optional, will try to load from environment if not provided)
    -        """
    -        # Set API key
    -        self.api_key = api_key
    -        
    -        # If API key not provided, try to load from environment or .env file
    -        if self.api_key is None:
    -            # First check environment variables
    -            if "FIRECRAWL_API_KEY" in os.environ:
    -                self.api_key = os.environ["FIRECRAWL_API_KEY"]
    -            else:
    -                # Try to load from .env file
    -                try:
    -                    api_key = self._load_api_key_from_env_file()
    -                    if api_key:
    -                        self.api_key = api_key
    -                    else:
    -                        # Print missing API key message
    -                        missing_api_key("FirecrawlCrawlWebsiteTool", "FIRECRAWL_API_KEY")
    -                        raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and could not be found in .env file")
    -                except ImportError:
    -                    # If dotenv is not installed and no API key in environment
    -                    if "FIRECRAWL_API_KEY" not in os.environ:
    -                        # Print missing API key message without dotenv support
    -                        missing_api_key("FirecrawlCrawlWebsiteTool", "FIRECRAWL_API_KEY", dotenv_support=False)
    -                        raise EnvironmentError("FIRECRAWL_API_KEY environment variable is not set and python-dotenv is not installed")
    -                    self.api_key = os.environ["FIRECRAWL_API_KEY"]
    -        
    -        # Verify firecrawl is importable; the client itself is created per call
    -        try:
    -            from firecrawl import FirecrawlApp
    -        except ImportError:
    -            raise ImportError("firecrawl-py is not installed. Please install it with 'pip install firecrawl-py'")
    -    
    -    def crawl_website(self, url: str, crawler_options: Optional[Dict[str, Any]] = None, timeout: int = 30000) -> Any:
    -        """
    -        Crawl a website using Firecrawl API.
    -        
    -        Args:
    -            url: Website URL to crawl
    -            crawler_options: Options for crawling (default: {})
    -            timeout: Timeout in milliseconds (default: 30000)
    -            
    -        Returns:
    -            Crawled content from the website
    -        """
    -        # Set default values
    -        if crawler_options is None:
    -            crawler_options = {}
    -        
    -        # Prepare crawl options
    -        options = {
    -            "crawlerOptions": crawler_options,
    -            "timeout": timeout,
    -        }
    -        
    -        # Initialize FirecrawlApp and crawl the URL
    -        from firecrawl import FirecrawlApp
    -        _firecrawl = FirecrawlApp(api_key=self.api_key)
    -        
    -        return _firecrawl.crawl_url(url, options)
    -
    -    @staticmethod
    -    def _load_api_key_from_env_file() -> Optional[str]:
    -        """
    -        Try to load the FIRECRAWL_API_KEY from a .env file using python-dotenv.
    -        
    -        Returns:
    -            The API key if found in .env file, None otherwise
    -        """
    -        try:
    -            # Try to import dotenv
    -            from dotenv import load_dotenv
    -        except ImportError:
    -            raise ImportError("python-dotenv is not installed. Please install it with 'pip install python-dotenv'")
    -        
    -        # Check for .env file in current directory and parent directories
    -        current_dir = pathlib.Path.cwd()
    -        
    -        # Look in current directory and up to 3 parent directories
    -        for _ in range(4):
    -            env_path = current_dir / '.env'
    -            if env_path.exists():
    -                # Load the .env file
    -                load_dotenv(dotenv_path=env_path)
    -                
    -                # Check if FIRECRAWL_API_KEY is now in environment
    -                if "FIRECRAWL_API_KEY" in os.environ:
    -                    return os.environ["FIRECRAWL_API_KEY"]
    -            
    -            # Move to parent directory
    -            parent_dir = current_dir.parent
    -            if parent_dir == current_dir:  # Reached root directory
    -                break
    -            current_dir = parent_dir
    -        
    -        return None
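The API-key resolution logic above is copied verbatim across all three Firecrawl tools. A hedged sketch of how the shared precedence (explicit argument first, then the process environment, with the `.env` walk as a final fallback) could be factored into one helper; the mixin name and method are hypothetical and not part of Upsonic:

```python
import os
from typing import Optional

class FirecrawlKeyMixin:
    """Hypothetical mixin consolidating the key-resolution steps that are
    duplicated across the three Firecrawl tool classes."""

    ENV_VAR = "FIRECRAWL_API_KEY"

    @classmethod
    def resolve_key(cls, explicit: Optional[str] = None) -> Optional[str]:
        # Precedence: explicit constructor argument, then the process
        # environment. (The .env-file ascent used by the tools above
        # would slot in as a final fallback.)
        if explicit:
            return explicit
        return os.environ.get(cls.ENV_VAR)
```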
    -
    -
    -class YFinanceTool:
    -    @staticmethod
    -    def analyze_dependencies() -> Dict[str, bool]:
    -        """
    -        Analyze the dependencies required for YFinanceTool and return their status.
    -        
    -        Returns:
    -            Dictionary with dependency names as keys and their availability status as values
    -        """
    -        dependencies = {
    -            "yfinance": False,
    -            "pandas": False
    -        }
    -        
    -        # Check each dependency
    -        try:
    -            import yfinance
    -            dependencies["yfinance"] = True
    -        except ImportError:
    -            pass
    -        
    -        try:
    -            import pandas
    -            dependencies["pandas"] = True
    -        except ImportError:
    -            pass
    -        
    -        return dependencies
    -    
    -    def __control__(self) -> bool:
    -        """
    -        Check if the required dependencies are installed and print missing ones.
    -        
    -        Returns:
    -            True if all requirements are met
    -        
    -        Raises:
    -            ImportError: If required packages are not installed
    -        """
    -        # Analyze dependencies
    -        dependencies = self.analyze_dependencies()
    -        missing = [dep for dep, installed in dependencies.items() if not installed]
    -        
    -        # Print missing dependencies
    -        if missing:
    -            # Use the new printing function
    -            missing_dependencies("YFinanceTool", missing)
    -            
    -            # Raise ImportError with combined message for all missing dependencies
    -            install_cmd = "pip install " + " ".join(missing)
    -            raise ImportError(f"Missing dependencies: {', '.join(missing)}. Please install them with: {install_cmd}")
    -        
    -        return True
    -    
    -    def __init__(self):
    -        """
    -        Initialize the YFinanceTool.
    -        """
    -        # Check if dependencies are installed
    -        self.__control__()
    -    
    -    def get_ticker_info(self, ticker: str) -> Dict[str, Any]:
    -        """
    -        Get basic information about a ticker.
    -        
    -        Args:
    -            ticker: The ticker symbol (e.g., 'AAPL' for Apple)
    -            
    -        Returns:
    -            Dictionary containing basic information about the ticker
    -        """
    -        import yfinance as yf
    -        
    -        # Get ticker object
    -        ticker_obj = yf.Ticker(ticker)
    -        
    -        # Get basic info
    -        info = ticker_obj.info
    -        
    -        return info
    -    
    -    def get_historical_data(self, ticker: str, period: str = "1mo", interval: str = "1d") -> Dict[str, Any]:
    -        """
    -        Get historical market data for a ticker.
    -        
    -        Args:
    -            ticker: The ticker symbol (e.g., 'AAPL' for Apple)
    -            period: The period to fetch data for (default: '1mo')
    -                Valid periods: 1d, 5d, 1mo, 3mo, 6mo, 1y, 2y, 5y, 10y, ytd, max
    -            interval: The interval between data points (default: '1d')
    -                Valid intervals: 1m, 2m, 5m, 15m, 30m, 60m, 90m, 1h, 1d, 5d, 1wk, 1mo, 3mo
    -            
    -        Returns:
    -            Dictionary containing historical data
    -        """
    -        import yfinance as yf
    -        import pandas as pd
    -        
    -        # Get ticker object
    -        ticker_obj = yf.Ticker(ticker)
    -        
    -        # Get historical data
    -        hist = ticker_obj.history(period=period, interval=interval)
    -        
    -        # Convert DataFrame to dictionary
    -        hist_dict = hist.reset_index().to_dict(orient='records')
    -        
    -        return {
    -            "data": hist_dict,
    -            "period": period,
    -            "interval": interval,
    -            "ticker": ticker
    -        }
    -    
    -    def get_financials(self, ticker: str) -> Dict[str, Any]:
    -        """
    -        Get financial statements for a ticker.
    -        
    -        Args:
    -            ticker: The ticker symbol (e.g., 'AAPL' for Apple)
    -            
    -        Returns:
    -            Dictionary containing financial statements
    -        """
    -        import yfinance as yf
    -        import pandas as pd
    -        
    -        # Get ticker object
    -        ticker_obj = yf.Ticker(ticker)
    -        
    -        # Get financial statements
    -        income_stmt = ticker_obj.income_stmt
    -        balance_sheet = ticker_obj.balance_sheet
    -        cash_flow = ticker_obj.cashflow
    -        
    -        # Convert DataFrames to dictionaries
    -        income_stmt_dict = income_stmt.reset_index().to_dict(orient='records') if not income_stmt.empty else []
    -        balance_sheet_dict = balance_sheet.reset_index().to_dict(orient='records') if not balance_sheet.empty else []
    -        cash_flow_dict = cash_flow.reset_index().to_dict(orient='records') if not cash_flow.empty else []
    -        
    -        return {
    -            "income_statement": income_stmt_dict,
    -            "balance_sheet": balance_sheet_dict,
    -            "cash_flow": cash_flow_dict,
    -            "ticker": ticker
    -        }
    -    
    -    def search_tickers(self, query: str, limit: int = 10) -> List[Dict[str, str]]:
    -        """
    -        Search for ticker symbols based on a query.
    -        
    -        Args:
    -            query: The search query (e.g., 'Apple')
    -            limit: Maximum number of results to return (default: 10)
    -            
    -        Returns:
    -            List of dictionaries containing ticker symbols and company names
    -        """
    -        import yfinance as yf
    -        
    -        try:
    -            # Use yfinance's search functionality
    -            tickers = yf.Tickers(query)
    -            
    -            # Get the tickers that were found
    -            found_tickers = list(tickers.tickers.keys())
    -            
    -            # Limit the number of results
    -            found_tickers = found_tickers[:limit]
    -            
    -            # Get info for each ticker
    -            result = []
    -            for ticker_symbol in found_tickers:
    -                try:
    -                    ticker_obj = yf.Ticker(ticker_symbol)
    -                    info = ticker_obj.info
    -                    result.append({
    -                        "symbol": ticker_symbol,
    -                        "name": info.get("shortName", "Unknown"),
    -                        "exchange": info.get("exchange", "Unknown"),
    -                        "industry": info.get("industry", "Unknown")
    -                    })
    -                except Exception as e:
    -                    # Skip tickers that cause errors
    -                    continue
    -            
    -            return result
    -        except Exception as e:
    -            # If the search fails, return an empty list
    -            return []
    -
    -
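The `analyze_dependencies`/`__control__` pattern removed above is duplicated verbatim across `YFinanceTool`, `ArxivTool`, and `YouTubeVideo`. As an illustrative sketch only (the helper names and the `importlib`-based probe are assumptions, not Upsonic API), the same check can be written once and reused:

```python
import importlib.util
from typing import Dict, List


def analyze_dependencies(packages: List[str]) -> Dict[str, bool]:
    """Report availability of each package without importing it fully."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}


def check_dependencies(tool_name: str, packages: List[str]) -> bool:
    """Raise a single ImportError listing every missing package at once."""
    status = analyze_dependencies(packages)
    missing = [pkg for pkg, ok in status.items() if not ok]
    if missing:
        install_cmd = "pip install " + " ".join(missing)
        raise ImportError(
            f"{tool_name} is missing dependencies: {', '.join(missing)}. "
            f"Please install them with: {install_cmd}"
        )
    return True
```

Using `importlib.util.find_spec` avoids actually importing heavy packages just to test for their presence, and collecting all missing names before raising mirrors the combined error message in the removed code.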
    -class ArxivTool:
    -    @staticmethod
    -    def analyze_dependencies() -> Dict[str, bool]:
    -        """
    -        Analyze the dependencies required for ArxivTool and return their status.
    -        
    -        Returns:
    -            Dictionary with dependency names as keys and their availability status as values
    -        """
    -        dependencies = {
    -            "arxiv": False,
    -            "requests": False,
    -            "PyPDF2": False
    -        }
    -        
    -        # Check each dependency
    -        try:
    -            import arxiv
    -            dependencies["arxiv"] = True
    -        except ImportError:
    -            pass
    -        
    -        try:
    -            import requests
    -            dependencies["requests"] = True
    -        except ImportError:
    -            pass
    -        
    -        try:
    -            import PyPDF2
    -            dependencies["PyPDF2"] = True
    -        except ImportError:
    -            pass
    -        
    -        return dependencies
    -    
    -    def __control__(self) -> bool:
    -        """
    -        Check if the required dependencies are installed and print missing ones.
    -        
    -        Returns:
    -            True if all requirements are met
    -        
    -        Raises:
    -            ImportError: If required packages are not installed
    -        """
    -        # Analyze dependencies
    -        dependencies = self.analyze_dependencies()
    -        missing = [dep for dep, installed in dependencies.items() if not installed]
    -        
    -        # Print missing dependencies
    -        if missing:
    -            # Use the new printing function
    -            missing_dependencies("ArxivTool", missing)
    -            
    -            # Raise ImportError with combined message for all missing dependencies
    -            install_cmd = "pip install " + " ".join(missing)
    -            raise ImportError(f"Missing dependencies: {', '.join(missing)}. Please install them with: {install_cmd}")
    -        
    -        return True
    -    
    -    def __init__(self):
    -        """
    -        Initialize the ArxivTool.
    -        """
    -        # Check if dependencies are installed
    -        self.__control__()
    -    
    -    def search(self, query: str, max_results: int = 5, sort_by: str = "relevance", sort_order: str = "descending") -> List[Dict[str, Any]]:
    -        """
    -        Search for papers on arXiv.
    -        
    -        Args:
    -            query: The search query
    -            max_results: Maximum number of results to return (default: 5)
    -            sort_by: Sort results by 'relevance', 'lastUpdatedDate', or 'submittedDate' (default: 'relevance')
    -            sort_order: Sort order, either 'ascending' or 'descending' (default: 'descending')
    -            
    -        Returns:
    -            List of dictionaries containing paper information
    -        """
    -        import arxiv
    -        
    -        # Map sort_by to arxiv.SortCriterion
    -        sort_criteria = {
    -            "relevance": arxiv.SortCriterion.Relevance,
    -            "lastUpdatedDate": arxiv.SortCriterion.LastUpdatedDate,
    -            "submittedDate": arxiv.SortCriterion.SubmittedDate
    -        }
    -        
    -        # Map sort_order to arxiv.SortOrder
    -        sort_orders = {
    -            "ascending": arxiv.SortOrder.Ascending,
    -            "descending": arxiv.SortOrder.Descending
    -        }
    -        
    -        # Set default values if invalid options are provided
    -        if sort_by not in sort_criteria:
    -            sort_by = "relevance"
    -        if sort_order not in sort_orders:
    -            sort_order = "descending"
    -        
    -        # Create search client
    -        client = arxiv.Client()
    -        
    -        # Create search query
    -        search = arxiv.Search(
    -            query=query,
    -            max_results=max_results,
    -            sort_by=sort_criteria[sort_by],
    -            sort_order=sort_orders[sort_order]
    -        )
    -        
    -        # Execute search
    -        results = list(client.results(search))
    -        
    -        # Convert results to dictionaries
    -        papers = []
    -        for paper in results:
    -            papers.append({
    -                "title": paper.title,
    -                "authors": [author.name for author in paper.authors],
    -                "summary": paper.summary,
    -                "published": paper.published.strftime("%Y-%m-%d") if paper.published else None,
    -                "updated": paper.updated.strftime("%Y-%m-%d") if paper.updated else None,
    -                "doi": paper.doi,
    -                "primary_category": paper.primary_category,
    -                "categories": paper.categories,
    -                "links": [link.href for link in paper.links],
    -                "pdf_url": paper.pdf_url,
    -                "entry_id": paper.entry_id
    -            })
    -        
    -        return papers
    -    
    -    def get_paper_by_id(self, paper_id: str) -> Dict[str, Any]:
    -        """
    -        Get a specific paper by its arXiv ID.
    -        
    -        Args:
    -            paper_id: The arXiv ID of the paper (e.g., '2106.09685')
    -            
    -        Returns:
    -            Dictionary containing paper information
    -        """
    -        import arxiv
    -        
    -        # Create client
    -        client = arxiv.Client()
    -        
    -        # Search for the specific paper
    -        search = arxiv.Search(id_list=[paper_id])
    -        
    -        # Get the paper
    -        results = list(client.results(search))
    -        
    -        if not results:
    -            return {"error": f"Paper with ID {paper_id} not found"}
    -        
    -        paper = results[0]
    -        
    -        # Convert to dictionary
    -        paper_dict = {
    -            "title": paper.title,
    -            "authors": [author.name for author in paper.authors],
    -            "summary": paper.summary,
    -            "published": paper.published.strftime("%Y-%m-%d") if paper.published else None,
    -            "updated": paper.updated.strftime("%Y-%m-%d") if paper.updated else None,
    -            "doi": paper.doi,
    -            "primary_category": paper.primary_category,
    -            "categories": paper.categories,
    -            "links": [link.href for link in paper.links],
    -            "pdf_url": paper.pdf_url,
    -            "entry_id": paper.entry_id
    -        }
    -        
    -        return paper_dict
    -    
    -    def download_paper(self, paper_id: str, output_dir: str = "./") -> Dict[str, Any]:
    -        """
    -        Download a paper's PDF by its arXiv ID.
    -        
    -        Args:
    -            paper_id: The arXiv ID of the paper (e.g., '2106.09685')
    -            output_dir: Directory to save the PDF (default: current directory)
    -            
    -        Returns:
    -            Dictionary containing download information
    -        """
    -        import arxiv
    -        import os
    -        import requests
    -        
    -        # Create client
    -        client = arxiv.Client()
    -        
    -        # Search for the specific paper
    -        search = arxiv.Search(id_list=[paper_id])
    -        
    -        # Get the paper
    -        results = list(client.results(search))
    -        
    -        if not results:
    -            return {"error": f"Paper with ID {paper_id} not found"}
    -        
    -        paper = results[0]
    -        
    -        # Create output directory if it doesn't exist
    -        os.makedirs(output_dir, exist_ok=True)
    -        
    -        # Generate filename
    -        filename = f"{paper_id.replace('/', '_')}.pdf"
    -        filepath = os.path.join(output_dir, filename)
    -        
    -        # Download the PDF
    -        try:
    -            response = requests.get(paper.pdf_url)
    -            response.raise_for_status()
    -            
    -            with open(filepath, 'wb') as f:
    -                f.write(response.content)
    -            
    -            return {
    -                "success": True,
    -                "paper_id": paper_id,
    -                "title": paper.title,
    -                "filepath": filepath,
    -                "pdf_url": paper.pdf_url
    -            }
    -        except Exception as e:
    -            return {
    -                "success": False,
    -                "paper_id": paper_id,
    -                "error": str(e),
    -                "pdf_url": paper.pdf_url
    -            }
    -    
    -    def read_paper(self, paper_id: str, max_pages: int = None) -> Dict[str, Any]:
    -        """
    -        Read a paper's content directly by its arXiv ID.
    -        
    -        Args:
    -            paper_id: The arXiv ID of the paper (e.g., '2106.09685')
    -            max_pages: Maximum number of pages to read (default: None, reads all pages)
    -            
    -        Returns:
    -            Dictionary containing the paper's content and metadata
    -        """
    -        import arxiv
    -        import requests
    -        import tempfile
    -        import os
    -        import PyPDF2
    -        
    -        # Create client
    -        client = arxiv.Client()
    -        
    -        # Search for the specific paper
    -        search = arxiv.Search(id_list=[paper_id])
    -        
    -        # Get the paper
    -        results = list(client.results(search))
    -        
    -        if not results:
    -            return {"error": f"Paper with ID {paper_id} not found"}
    -        
    -        paper = results[0]
    -        
    -        # Create a temporary directory
    -        with tempfile.TemporaryDirectory() as temp_dir:
    -            # Generate filename
    -            filename = f"{paper_id.replace('/', '_')}.pdf"
    -            filepath = os.path.join(temp_dir, filename)
    -            
    -            # Download the PDF
    -            try:
    -                response = requests.get(paper.pdf_url)
    -                response.raise_for_status()
    -                
    -                with open(filepath, 'wb') as f:
    -                    f.write(response.content)
    -                
    -                # Read the PDF content
    -                with open(filepath, 'rb') as f:
    -                    pdf_reader = PyPDF2.PdfReader(f)
    -                    
    -                    # Get number of pages
    -                    num_pages = len(pdf_reader.pages)
    -                    
    -                    # Limit pages if max_pages is specified
    -                    if max_pages is not None:
    -                        num_pages = min(num_pages, max_pages)
    -                    
    -                    # Extract text from pages with better handling
    -                    content = []
    -                    for page_num in range(num_pages):
    -                        try:
    -                            page = pdf_reader.pages[page_num]
    -                            page_text = page.extract_text()
    -                            
    -                            # Skip empty pages or pages with very little content
    -                            if page_text and len(page_text.strip()) > 20:
    -                                # Clean up the text
    -                                page_text = page_text.replace('\n\n', '\n')
    -                                content.append(f"--- Page {page_num + 1} ---\n{page_text}")
    -                        except Exception as e:
    -                            content.append(f"--- Page {page_num + 1} ---\n[Error extracting text: {str(e)}]")
    -                    
    -                    # Join all pages with clear separation
    -                    full_content = "\n\n".join(content)
    -                    
    -                    # If we couldn't extract meaningful content, try an alternative approach
    -                    if not full_content or len(full_content.strip()) < 100:
    -                        try:
    -                            # Alternative extraction method
    -                            full_content = "Content could not be extracted properly. Please try downloading the paper directly."
    -                            
    -                            # Include the abstract as a fallback
    -                            full_content = f"Abstract:\n{paper.summary}\n\n{full_content}"
    -                        except Exception:
    -                            full_content = "Failed to extract content from the PDF. Please download the paper directly."
    -                
    -                return {
    -                    "success": True,
    -                    "paper_id": paper_id,
    -                    "title": paper.title,
    -                    "authors": [author.name for author in paper.authors],
    -                    "summary": paper.summary,
    -                    "content": full_content,
    -                    "total_pages": len(pdf_reader.pages),
    -                    "pages_read": num_pages,
    -                    "pdf_url": paper.pdf_url
    -                }
    -            except Exception as e:
    -                # If we failed to process the PDF, return the summary at least
    -                return {
    -                    "success": False,
    -                    "paper_id": paper_id,
    -                    "title": paper.title,
    -                    "authors": [author.name for author in paper.authors],
    -                    "summary": paper.summary,
    -                    "error": str(e),
    -                    "pdf_url": paper.pdf_url,
    -                    "note": "Failed to extract content. You can still access the paper directly using the pdf_url."
    -                }
    -
    -
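The `download_paper` method above joins a filename onto `output_dir` after replacing `/` in `paper_id`, and the advisory itself concerns an unsanitized `os.path.join` over `file.filename` in `markdown/server.py`. A minimal hardened join might look like the sketch below (`safe_join` is a hypothetical helper for illustration, not code from the patch; it does not attempt to handle URL-encoded separators):

```python
import os


def safe_join(output_dir: str, untrusted_name: str) -> str:
    """Join an untrusted filename onto output_dir, rejecting path traversal."""
    # Keep only the final path component, discarding any directory parts
    # (normalize Windows-style separators first).
    candidate = os.path.basename(untrusted_name.replace("\\", "/"))
    if candidate in ("", ".", ".."):
        raise ValueError(f"Unsafe filename: {untrusted_name!r}")
    filepath = os.path.join(output_dir, candidate)
    # Belt and braces: verify the resolved result stays inside output_dir.
    base = os.path.realpath(output_dir)
    if os.path.commonpath([base, os.path.realpath(filepath)]) != base:
        raise ValueError(f"Path escapes {output_dir!r}: {untrusted_name!r}")
    return filepath
```

The `basename` call alone already strips `../` prefixes; the `commonpath` check is a second line of defense against symlink tricks in `output_dir`.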
    -class YouTubeVideo:
    -    @staticmethod
    -    def analyze_dependencies() -> Dict[str, bool]:
    -        """
    -        Analyze the dependencies required for YouTubeVideo and return their status.
    -        
    -        Returns:
    -            Dictionary with dependency names as keys and their availability status as values
    -        """
    -        dependencies = {
    -            "youtube_transcript_api": False
    -        }
    -        
    -        # Check each dependency
    -        try:
    -            import youtube_transcript_api
    -            dependencies["youtube_transcript_api"] = True
    -        except ImportError:
    -            pass
    -        
    -        return dependencies
    -        
    -    @staticmethod
    -    def __control__() -> bool:
    -        # Check the import youtube_transcript_api
    -        try:
    -            import youtube_transcript_api
    -            return True
    -        except ImportError:
    -            # Use the missing_dependencies function to display the error
    -            missing_dependencies("YouTubeVideo", ["youtube_transcript_api"])
    -            raise ImportError("Missing dependency: youtube_transcript_api. Please install it with: pip install youtube_transcript_api")
    -    
    -    @staticmethod
    -    def get_video_id(url: str) -> Optional[str]:
    -        """
    -        Extract the YouTube video ID from a URL.
    -        
    -        Args:
    -            url: The URL of the YouTube video
    -            
    -        Returns:
    -            The video ID or None if not found
    -        """
    -        import re
    -        from urllib.parse import urlparse, parse_qs
    -        
    -        # Handle different YouTube URL formats
    -        parsed_url = urlparse(url)
    -        hostname = parsed_url.hostname
    -        
    -        if hostname == "youtu.be":
    -            return parsed_url.path[1:]
    -        
    -        if hostname in ("www.youtube.com", "youtube.com"):
    -            if parsed_url.path == "/watch":
    -                query_params = parse_qs(parsed_url.query)
    -                return query_params.get("v", [None])[0]
    -            if parsed_url.path.startswith("/embed/"):
    -                return parsed_url.path.split("/")[2]
    -            if parsed_url.path.startswith("/v/"):
    -                return parsed_url.path.split("/")[2]
    -        
    -        # Try to extract ID using regex as fallback
    -        youtube_regex = r"(?:youtube\.com\/(?:[^\/]+\/.+\/|(?:v|e(?:mbed)?)\/|.*[?&]v=)|youtu\.be\/)([^\"&?\/\s]{11})"
    -        match = re.search(youtube_regex, url)
    -        if match:
    -            return match.group(1)
    -            
    -        return None
    -    
    -    @staticmethod
    -    def get_captions(url: str, languages: List[str] = None) -> str:
    -        """
    -        Get captions/transcript from a YouTube video.
    -        
    -        Args:
    -            url: The URL of the YouTube video
    -            languages: List of language codes to try (default: ["en"])
    -            
    -        Returns:
    -            The video transcript as text
    -        """
    -        from youtube_transcript_api import YouTubeTranscriptApi
    -        
    -        # Default to English if no languages specified
    -        if not languages:
    -            languages = ["en"]
    -            
    -        video_id = YouTubeVideo.get_video_id(url)
    -        if not video_id:
    -            return "Er
    ... [truncated]
    
  • src/upsonic/tools_server/function_client.py+0 175 removed
    @@ -1,175 +0,0 @@
    -import httpx
    -from typing import Dict, List, Any, Callable, Optional
    -from functools import wraps
    -import inspect
    -
    -
    -class FunctionToolManager:
    -    """Client for interacting with the Upsonic Functions API."""
    -
    -    def __init__(self):
    -        """Initialize the Upsonic Function client."""
    -        self.base_url = "http://localhost:8086"
    -
    -    def get_tools_by_name(self, name: list[str]):
    -        """
    -        Get tools by name, supporting wildcard patterns.
    -        
    -        Args:
    -            name: List of tool names or patterns (e.g. ["FileSystem.*", "MyTools.*"])
    -            
    -        Returns:
    -            List of matching tools
    -        """
    -        matching_tools = []
    -        for tool in self.tools():
    -            tool_name = tool.__name__
    -            for pattern in name:
    -                # Handle wildcard pattern
    -                if pattern.endswith(".*"):
    -                    prefix = pattern[:-2]  # Remove .* from the end
    -                    if tool_name.startswith(prefix):
    -                        matching_tools.append(tool)
    -                        break
    -                # Exact match
    -                elif tool_name == pattern:
    -                    matching_tools.append(tool)
    -                    break
    -        return matching_tools
    -
    -    def __enter__(self):
    -        return self
    -
    -    def __exit__(self, exc_type, exc_val, exc_tb):
    -        pass
    -
    -    def close(self):
    -        """Close the client session."""
    -        pass
    -
    -    def list_tools(self) -> Dict[str, Any]:
    -        """List all available tools."""
    -        with httpx.Client(timeout=600.0) as session:
    -            response = session.post(f"{self.base_url}/functions/tools")
    -            response.raise_for_status()
    -            return response.json()
    -
    -    def call_tool(self, tool_name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
    -        """
    -        Call a specific tool with the given arguments.
    -
    -        Args:
    -            tool_name: Name of the tool to call
    -            arguments: Dictionary of arguments to pass to the tool
    -
    -        Returns:
    -            Tool execution results
    -        """
    -        print("Tool Calling")
    -        print(tool_name)
    -        print(arguments)
    -        with httpx.Client(timeout=600.0) as session:
    -            response = session.post(
    -                f"{self.base_url}/functions/call_tool",
    -                json={"tool_name": tool_name, "arguments": arguments},
    -            )
    -            response.raise_for_status()
    -            print("return")
    -            print(response.json())
    -            return response.json()
    -
    -    def tools(self) -> List[Callable[..., Dict[str, Any]]]:
    -        """Initialize tool-specific methods based on available tools."""
    -        tools_response = self.list_tools()
    -
    -
    -
    -        tools = tools_response.get("available_tools", {}).get("tools", [])
    -
    -        functions: List[Callable[..., Dict[str, Any]]] = []
    -
    -        def get_python_type(schema_type: str, format: Optional[str] = None) -> type:
    -            """Convert JSON schema type to Python type."""
    -            type_mapping = {
    -                "string": str,
    -                "integer": int,
    -                "boolean": bool,
    -                "number": float,
    -                "array": list,
    -                "object": dict,
    -            }
    -            return type_mapping.get(schema_type, Any)
    -
    -        for tool in tools:
    -            tool_name: str = tool["name"]
    -            tool_desc: str = tool.get("description", "")
    -            input_schema: Dict[str, Any] = tool.get("inputSchema", {})
    -            properties: Dict[str, Dict[str, Any]] = input_schema.get("properties", {})
    -            required: List[str] = input_schema.get("required", [])
    -
    -            def create_tool_function(
    -                tool_name: str,
    -                properties: Dict[str, Dict[str, Any]],
    -                required: List[str],
    -            ) -> Callable[..., Dict[str, Any]]:
    -                annotations = {}
    -                defaults = {}
    -                parameters = []
    -
    -                # Build parameters for both required and optional arguments
    -                for param_name in required:
    -                    param_info = properties[param_name]
    -                    param_type = get_python_type(param_info.get("type", "any"))
    -                    annotations[param_name] = param_type
    -                    parameters.append(
    -                        inspect.Parameter(
    -                            param_name,
    -                            inspect.Parameter.POSITIONAL_OR_KEYWORD,
    -                            annotation=param_type
    -                        )
    -                    )
    -
    -                for param_name, param_info in properties.items():
    -                    if param_name not in required:
    -                        param_type = get_python_type(param_info.get("type", "any"))
    -                        annotations[param_name] = param_type
    -                        default_value = param_info.get("default", None)
    -                        defaults[param_name] = default_value
    -                        parameters.append(
    -                            inspect.Parameter(
    -                                param_name,
    -                                inspect.Parameter.POSITIONAL_OR_KEYWORD,
    -                                default=default_value,
    -                                annotation=param_type
    -                            )
    -                        )
    -
    -                def tool_function(*args: Any, **kwargs: Any) -> Dict[str, Any]:
    -                    all_kwargs = kwargs.copy()
    -                    for i, arg in enumerate(args):
    -                        if i < len(required):
    -                            all_kwargs[required[i]] = arg
    -
    -                    for param, default in defaults.items():
    -                        if param not in all_kwargs:
    -                            all_kwargs[param] = default
    -
    -                    return self.call_tool(tool_name, all_kwargs)
    -
    -                # Create a signature object and apply it to the function
    -                sig = inspect.Signature(parameters=parameters, return_annotation=Dict[str, Any])
    -                tool_function.__signature__ = sig
    -                tool_function.__name__ = tool_name
    -                tool_function.__annotations__ = {
    -                    **annotations,
    -                    "return": Dict[str, Any],
    -                }
    -                tool_function.__doc__ = f"{tool_desc}\n\nReturns:\n    Tool execution results"
    -
    -                return tool_function
    -
    -
    -            func = create_tool_function(tool_name, properties, required)
    -            functions.append(func)
    -
    -        return functions
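`create_tool_function` above synthesizes callables whose `inspect.Signature` mirrors a JSON input schema, so introspection tools see real parameter names. A condensed, self-contained sketch of the same technique (`make_tool_function` and the injected `call_tool` callback are illustrative names, not Upsonic API):

```python
import inspect
from typing import Any, Callable, Dict, List


def make_tool_function(
    tool_name: str,
    required: List[str],
    call_tool: Callable[[str, Dict[str, Any]], Dict[str, Any]],
) -> Callable[..., Dict[str, Any]]:
    """Build a named wrapper whose signature lists the required parameters."""
    parameters = [
        inspect.Parameter(name, inspect.Parameter.POSITIONAL_OR_KEYWORD)
        for name in required
    ]

    def tool_function(*args: Any, **kwargs: Any) -> Dict[str, Any]:
        # Map positional arguments onto the declared parameter names.
        all_kwargs = dict(zip(required, args))
        all_kwargs.update(kwargs)
        return call_tool(tool_name, all_kwargs)

    tool_function.__name__ = tool_name
    tool_function.__signature__ = inspect.Signature(
        parameters, return_annotation=Dict[str, Any]
    )
    return tool_function
```

Setting `__signature__` is what makes `inspect.signature()` (and frameworks that rely on it) report the schema-derived parameters instead of `(*args, **kwargs)`.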
    
  • src/upsonic/tools_server/__init__.py+0 31 removed
    @@ -1,31 +0,0 @@
    -from ..server_manager import ServerManager
    -from multiprocessing import freeze_support
    -
    -_server_manager = ServerManager(
    -    app_path="upsonic.tools_server.server.api:app",
    -    host="localhost",
    -    port=8086,
    -    name="tools"
    -)
    -
    -def run_tools_server(redirect_output: bool = False):
    -    """Start the tools server if it's not already running."""
    -    _server_manager.start(redirect_output=redirect_output)
    -
    -def run_tools_server_internal(reload: bool = True):
    -    """Run the tools server directly (for development)"""
    -    import uvicorn
    -    uvicorn.run("upsonic.tools_server.server.api:app", host="localhost", port=8086, reload=reload)
    -
    -def stop_tools_server():
    -    """Stop the tools server if it's running."""
    -    _server_manager.stop()
    -
    -def is_tools_server_running() -> bool:
    -    """Check if the tools server is currently running."""
    -    return _server_manager.is_running()
    -
    -if __name__ == '__main__':
    -    freeze_support()
    -
    -__all__ = ["run_tools_server", "stop_tools_server", "is_tools_server_running", "app", "run_tools_server_internal"]
    
  • src/upsonic/tools_server/server/api.py+0 92 removed
    @@ -1,92 +0,0 @@
    -from fastapi import FastAPI, HTTPException, Request, Response
    -import asyncio
    -from functools import wraps
    -from ...exception import TimeoutException
    -import inspect
    -from starlette.responses import JSONResponse
    -import threading
    -import time
    -import logging
    -
    -# Configure logging
    -logging.basicConfig(level=logging.ERROR)
    -logger = logging.getLogger(__name__)
    -
    -app = FastAPI()
    -
    -# Remove the middleware and use exception handlers instead
    -@app.exception_handler(Exception)
    -async def exception_handler(request: Request, exc: Exception):
    -    logging.error(f"Error: {exc}", exc_info=True)
    -    return JSONResponse(
    -        status_code=500,
    -        content={"detail": str(exc)}
    -    )
    -
    -# Import the cleanup function from server_utils instead of tools
    -from .server_utils import cleanup_all_servers
    -
    -@app.on_event("shutdown")
    -async def shutdown_event():
    -    """
    -    Clean up all server instances when the application shuts down.
    -    """
    -    await cleanup_all_servers()
    -
    -
    -async def timeout_handler(duration: float, coro):
    -    try:
    -        return await asyncio.wait_for(coro, timeout=duration)
    -    except asyncio.TimeoutError:
    -        raise TimeoutException(f"Operation timed out after {duration} seconds")
    -
    -def timeout(seconds: float):
    -    def decorator(func):
    -        @wraps(func)
    -        async def async_wrapper(*args, **kwargs):
    -            try:
    -                # Create a task for the function
    -                task = asyncio.create_task(func(*args, **kwargs))
    -                # Wait for the task to complete with timeout
    -                result = await asyncio.wait_for(task, timeout=seconds)
    -                return result
    -            except asyncio.TimeoutError:
    -                raise HTTPException(
    -                    status_code=408,
    -                    detail=f"Function timed out after {seconds} seconds"
    -                )
    -
    -        @wraps(func)
    -        def sync_wrapper(*args, **kwargs):
    -            # For synchronous functions, we'll use a thread-based approach
    -            result = []
    -            error = []
    -            
    -            def target():
    -                try:
    -                    result.append(func(*args, **kwargs))
    -                except Exception as e:
    -                    error.append(e)
    -            
    -            thread = threading.Thread(target=target)
    -            thread.daemon = True
    -            thread.start()
    -            thread.join(timeout=seconds)  # Wait for the specified timeout
    -            
    -            if thread.is_alive():
    -                raise HTTPException(
    -                    status_code=408,
    -                    detail=f"Function timed out after {seconds} seconds"
    -                )
    -            
    -            if error:
    -                raise error[0]
    -            
    -            return result[0]
    -
    -        return async_wrapper if inspect.iscoroutinefunction(func) else sync_wrapper
    -    return decorator
    -
    -@app.get("/status")
    -async def get_status():
    -    return {"status": "Server is running"}
    
  • src/upsonic/tools_server/server/function_tools.py (removed)  +0 −249
    @@ -1,249 +0,0 @@
    -import traceback
    -from fastapi import HTTPException
    -from pydantic import BaseModel
    -import inspect
    -from typing import Any, Dict, List, Type, Callable
    -from functools import wraps
    -
    -from .api import app, timeout
    -
    -prefix = "/functions"
    -
    -# Registry to store decorated functions
    -registered_functions: Dict[str, Dict[str, Any]] = {}
    -
    -
    -def _get_json_type(python_type: Type) -> str:
    -    """Convert Python type to JSON schema type."""
    -    type_mapping = {
    -        str: "string",
    -        int: "integer",
    -        bool: "boolean",
    -        float: "number",
    -        list: "array",
    -        dict: "object",
    -    }
    -    return type_mapping.get(python_type, "string")
    -
    -
    -def tool(description: str = "", custom_properties: Dict[str, Any] = None, custom_required: List[str] = None):
    -    """
    -    Decorator to register a function as a tool.
    -
    -    Args:
    -        description: Optional description of the tool. If not provided, function's docstring will be used.
    -    """
    -
    -    def decorator(func: Callable):
    -        sig = inspect.signature(func)
    -
    -
    -        # Get parameter info
    -        properties = {}
    -        required = []
    -
    -        # Extract description from docstring if not provided
    -        tool_description = description
    -        if not tool_description and func.__doc__:
    -            # Get the first line of the docstring as description
    -            tool_description = func.__doc__.strip().split('\n')[0].strip()
    -
    -
    -        
    -        for param_name, param in sig.parameters.items():
    -            param_type = (
    -                param.annotation if param.annotation != inspect.Parameter.empty else Any
    -            )
    -            param_default = (
    -                None if param.default == inspect.Parameter.empty else param.default
    -            )
    -
    -            properties[param_name] = {
    -                "type": _get_json_type(param_type),
    -                "description": f"Parameter {param_name}",
    -            }
    -
    -            if param_default is not None:
    -                properties[param_name]["default"] = param_default
    -            
    -            # If parameter has no default value, it's required
    -            if param.default == inspect.Parameter.empty:
    -                required.append(param_name)
    -
    -
    -        if custom_properties is not None:
    -            properties = custom_properties
    -
    -        if custom_required is not None:
    -            required = custom_required
    -
    -        # Register the function with the extracted description
    -        registered_functions[func.__name__] = {
    -            "function": func,
    -            "description": tool_description,
    -            "properties": properties,
    -            "required": required,
    -        }
    -
    -        # Check if the function is async
    -        is_async = inspect.iscoroutinefunction(func)
    -
    -        if is_async:
    -            @wraps(func)
    -            async def wrapper(*args, **kwargs):
    -                return await func(*args, **kwargs)
    -        else:
    -            @wraps(func)
    -            def wrapper(*args, **kwargs):
    -                return func(*args, **kwargs)
    -
    -        return wrapper
    -
    -    return decorator
    -
    -
    -class ToolRequest(BaseModel):
    -    tool_name: str
    -    arguments: dict
    -
    -
    -@app.post(f"{prefix}/tools")
    -@timeout(30.0)
    -async def list_tools():
    -
    -    tools = []
    -    for name, info in registered_functions.items():
    -        # Truncate description if it's longer than 1024 characters
    -        description = info["description"]
    -        if len(description) > 1024:
    -            description = description[:1020] + "..."
    -            
    -        tools.append(
    -            {
    -                "name": name,
    -                "description": description,
    -                "inputSchema": {
    -                    "type": "object",
    -                    "properties": info["properties"],
    -                    "required": info["required"],
    -                },
    -            }
    -        )
    -
    -    print(tools)    
    -    return {"available_tools": {"tools": tools}}
    -
    -
    -@app.post(f"{prefix}/call_tool")
    -@timeout(30.0)
    -async def call_tool(request: ToolRequest):
    -
    -    print("Calling tool")
    -    print(request)
    -
    -    if request.tool_name not in registered_functions:
    -        raise HTTPException(
    -            status_code=404, detail=f"Tool {request.tool_name} not found"
    -        )
    -
    -    try:
    -        func = registered_functions[request.tool_name]["function"]
    -        # Check if the function is async
    -        is_async = inspect.iscoroutinefunction(func)
    -        
    -        if is_async:
    -            result = await func(**request.arguments)
    -        else:
    -            result = func(**request.arguments)
    -            
    -        print("Tool result")
    -        print(result)
    -
    -        return {"result": result}
    -    except Exception as e:
    -        traceback.print_exc()
    -        return {"status_code": 500, "detail": f"Failed to call tool: {str(e)}"}
    -
    -
    -# Example decorated functions
    -@tool()
    -async def add_numbers(a: int, b: int, c: int=0) -> int:
    -    "Add two numbers together"
    -    return a + b + c
    -
    -
    -@tool()
    -def concat_strings(str1: str, str2: str) -> str:
    -    "Concatenate two strings"
    -    return str1 + str2
    -
    -
    -@tool()
    -def Search__duckduckgo(query: str, max_results: int = 20) -> list:
    -    """
    -    Search the query on DuckDuckGo and return the results.
    -    """
    -    try:
    -        from duckduckgo_search import DDGS
    -
    -        return list(DDGS().text(query, max_results=max_results))
    -    except:
    -        return "An exception occurred"
    -    
    -
    -import re
    -import requests
    -from bs4 import BeautifulSoup
    -from urllib.parse import urljoin
    -@tool()
    -def Search__read_website(url: str, max_content_length: int = 5000) -> dict:
    -    """
    -    Read the content of a website and return the title, meta data, content, and sub-links.
    -    """
    -    try:
    -        response = requests.get(url, timeout=10.0)
    -        response.raise_for_status()
    -        html = response.text
    -    except requests.RequestException as e:
    -        return {"error": f"Failed to retrieve the website content: {e}"}
    -
    -    soup = BeautifulSoup(html, "html.parser")
    -
    -    meta_properties = [
    -        "og:description",
    -        "og:site_name",
    -        "og:title",
    -        "og:type",
    -        "og:url",
    -        "description",
    -        "keywords",
    -        "author",
    -    ]
    -    meta = {}
    -    for property_name in meta_properties:
    -        tag = soup.find("meta", property=property_name) or soup.find(
    -            "meta", attrs={"name": property_name}
    -        )
    -        if tag:
    -            meta[property_name] = tag.get("content", "")
    -
    -    for ignore_tag in soup(["script", "style"]):
    -        ignore_tag.decompose()
    -
    -    title = soup.title.string.strip() if soup.title else ""
    -    content = soup.body.get_text(separator="\n") if soup.body else ""
    -
    -    links = []
    -    for a in soup.find_all("a", href=True):
    -        link_url = urljoin(url, a["href"])
    -        links.append({"title": a.text.strip(), "link": link_url})
    -
    -    content = re.sub(r"[\n\r\t]+", "\n", content)
    -    content = re.sub(r" +", " ", content)
    -    content = re.sub(r"[\n ]{3,}", "\n\n", content)
    -    content = content.strip()
    -
    -    if len(content) > max_content_length:
    -        content = content[:max_content_length].rsplit(" ", 1)[0] + "..."
    -
    -    return {"meta": meta, "title": title, "content": content, "sub_links": links}
    \ No newline at end of file
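The removed `function_tools.py` built each tool's `inputSchema` by mapping Python type annotations to JSON-schema type names and treating parameters without defaults as required. A simplified, self-contained sketch of that mapping (not the package's exact code):

```python
import inspect
from typing import Any, Dict

# Python type -> JSON-schema type name, falling back to "string"
_TYPE_MAP = {str: "string", int: "integer", bool: "boolean",
             float: "number", list: "array", dict: "object"}

def build_input_schema(func) -> Dict[str, Any]:
    """Derive a JSON-schema-style input schema from a function signature."""
    properties: Dict[str, Any] = {}
    required = []
    for name, param in inspect.signature(func).parameters.items():
        ann = (param.annotation
               if param.annotation is not inspect.Parameter.empty else Any)
        properties[name] = {"type": _TYPE_MAP.get(ann, "string"),
                            "description": f"Parameter {name}"}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default => caller must supply it
        else:
            properties[name]["default"] = param.default
    return {"type": "object", "properties": properties, "required": required}

def add_numbers(a: int, b: int, c: int = 0) -> int:
    return a + b + c

schema = build_input_schema(add_numbers)
```

Under this scheme `a` and `b` are required while `c` carries its default, which matches how the removed `list_tools` endpoint advertised the example tools.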
    
  • src/upsonic/tools_server/server/__init__.py (removed)  +0 −6
    @@ -1,6 +0,0 @@
    -from .api import app
    -
    -from .function_tools import *
    -from .tools import *
    -
    -__all__ = ["app"]
    
  • src/upsonic/tools_server/server/server_utils.py (removed)  +0 −29
    @@ -1,29 +0,0 @@
    -import logging
    -from typing import Dict, Any, List, Optional
    -
    -# Global dictionary to store server instances
    -# Key: (name, command, tuple(args), frozenset(env.items()))
    -# Value: Server instance
    -_server_instances = {}
    -
    -async def cleanup_all_servers():
    -    """
    -    Clean up all server instances.
    -    This should be called when the application is shutting down.
    -    """
    -    if not _server_instances:
    -        logging.info("No server instances to clean up")
    -        return
    -        
    -    logging.info(f"Cleaning up {len(_server_instances)} server instances")
    -    # We need to import Server here to avoid circular imports
    -    # This is safe because cleanup_all_servers is only called during shutdown
    -    for server in list(_server_instances.values()):
    -        try:
    -            await server.cleanup()
    -        except Exception as e:
    -            logging.error(f"Error cleaning up server {server.name}: {e}")
    -    
    -    # Clear the dictionary just to be sure
    -    _server_instances.clear()
    -    logging.info("All server instances cleaned up") 
    \ No newline at end of file
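The registry comment in the removed `server_utils.py` describes how server instances were deduplicated: keyed by `(name, command, tuple(args), frozenset(env.items()))`, since lists and dicts are unhashable. A minimal sketch of that keying pattern (names are illustrative):

```python
from typing import Dict, List, Optional, Tuple

_server_instances: dict = {}

def make_server_key(name: str, command: str, args: List[str],
                    env: Optional[Dict[str, str]]) -> Tuple:
    """Build a hashable identity for a server configuration.

    args becomes a tuple and env a frozenset of items, so two identical
    configurations always map to the same dictionary key.
    """
    env_items = frozenset(env.items()) if env else frozenset()
    return (name, command, tuple(args), env_items)

def get_or_create(name, command, args, env, factory):
    """Reuse an existing server for this configuration, else create one."""
    key = make_server_key(name, command, args, env)
    if key not in _server_instances:
        _server_instances[key] = factory()  # create only on first use
    return _server_instances[key]

k1 = make_server_key("tools", "uvx", ["my-server"], {"API_KEY": "x"})
k2 = make_server_key("tools", "uvx", ["my-server"], {"API_KEY": "x"})
```

The same key construction appears in the removed `add_mcp_tool_` below; shutdown then iterates the registry's values to close every connection once.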
    
  • src/upsonic/tools_server/server/tools.py (removed)  +0 −752
    @@ -1,752 +0,0 @@
    -import base64
    -import inspect
    -import subprocess
    -import traceback
    -import asyncio
    -import logging
    -import os
    -import shutil
    -from typing import List, Dict, Any, Optional, Union, Callable
    -from contextlib import AsyncExitStack, asynccontextmanager
    -from pydantic import BaseModel
    -
    -from fastapi import HTTPException
    -from pydantic import BaseModel
    -from mcp import ClientSession, StdioServerParameters
    -from mcp.client.stdio import stdio_client
    -from mcp.client.stdio import get_default_environment
    -from mcp.client.sse import sse_client
    -
    -# Configure logging
    -logging.basicConfig(
    -    level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
    -)
    -
    -# Import the shared server instances dictionary
    -from .server_utils import _server_instances
    -
    -class Server:
    -    """Manages MCP server connections and tool execution."""
    -
    -    def __init__(self, command: str, args: list, env: dict | None = None, name: str = "default") -> None:
    -        """Initialize a server with connection parameters.
    -        
    -        Args:
    -            command: The command to execute.
    -            args: Arguments for the command.
    -            env: Environment variables for the command.
    -            name: A name for this server instance.
    -        """
    -        self.name: str = name
    -        self.command: str = command
    -        self.args: list = args
    -        
    -        if env is None:
    -            self.env = get_default_environment()
    -        else:
    -            default_env = get_default_environment()
    -            default_env.update(env)
    -            self.env = default_env
    -            
    -        self.session: ClientSession | None = None
    -        self._cleanup_lock: asyncio.Lock = asyncio.Lock()
    -        self.exit_stack: AsyncExitStack = AsyncExitStack()
    -
    -    async def initialize(self) -> None:
    -        """Initialize the server connection."""
    -        if self.command is None:
    -            raise ValueError("The command must be a valid string and cannot be None.")
    -
    -        server_params = StdioServerParameters(
    -            command=self.command,
    -            args=self.args,
    -            env=self.env,
    -        )
    -        
    -        try:
    -            stdio_transport = await self.exit_stack.enter_async_context(
    -                stdio_client(server_params)
    -            )
    -            read, write = stdio_transport
    -            session = await self.exit_stack.enter_async_context(
    -                ClientSession(read, write)
    -            )
    -            await session.initialize()
    -            self.session = session
    -            logging.info(f"Server {self.name} initialized successfully")
    -        except Exception as e:
    -            logging.error(f"Error initializing server {self.name}: {e}")
    -            await self.cleanup()
    -            raise
    -
    -    async def execute_tool(
    -        self,
    -        tool_name: str,
    -        arguments: dict[str, Any],
    -        retries: int = 2,
    -        delay: float = 1.0,
    -    ) -> Any:
    -    
    -        """Execute a tool with retry mechanism.
    -
    -        Args:
    -            tool_name: Name of the tool to execute.
    -            arguments: Tool arguments.
    -            retries: Number of retry attempts.
    -            delay: Delay between retries in seconds.
    -
    -        Returns:
    -            Tool execution result.
    -
    -        Raises:
    -            RuntimeError: If server is not initialized.
    -            Exception: If tool execution fails after all retries.
    -        """
    -        if not self.session:
    -            raise RuntimeError(f"Server {self.name} not initialized")
    -
    -        attempt = 0
    -        while attempt < retries + 1:  # +1 because first attempt is not a retry
    -            try:
    -                logging.info(f"Executing {tool_name}...")
    -                result = await self.session.call_tool(tool_name, arguments)
    -                return result
    -            except Exception as e:
    -                attempt += 1
    -                if attempt <= retries:
    -                    logging.warning(
    -                        f"Error executing tool: {e}. Attempt {attempt} of {retries}."
    -                    )
    -                    logging.info(f"Retrying in {delay} seconds...")
    -                    await asyncio.sleep(delay)
    -                else:
    -                    logging.error("Max retries reached. Failing.")
    -                    raise
    -
    -    async def list_tools(self) -> Any:
    -        """List available tools from the server.
    -
    -        Returns:
    -            A list of available tools.
    -
    -        Raises:
    -            RuntimeError: If the server is not initialized.
    -        """
    -        if not self.session:
    -            raise RuntimeError(f"Server {self.name} not initialized")
    -
    -        return await self.session.list_tools()
    -
    -    async def cleanup(self) -> None:
    -        """Clean up server resources."""
    -        async with self._cleanup_lock:
    -            try:
    -                await self.exit_stack.aclose()
    -                self.session = None
    -                
    -                # Remove this server from the global instances dictionary
    -                for key, srv in list(_server_instances.items()):
    -                    if srv is self:
    -                        del _server_instances[key]
    -                        logging.info(f"Removed server {self.name} from instances registry")
    -                        break
    -            except Exception as e:
    -                logging.error(f"Error during cleanup of server {self.name}: {e}")
    -
    -class SSEServer:
    -    """Manages SSE-based MCP server connections and tool execution."""
    -
    -    def __init__(self, url: str, name: str = "default") -> None:
    -        """Initialize an SSE server with connection parameters.
    -        
    -        Args:
    -            url: The SSE server URL.
    -            name: A name for this server instance.
    -        """
    -        self.name: str = name
    -        self.url: str = url
    -        self.session: ClientSession | None = None
    -        self._cleanup_lock: asyncio.Lock = asyncio.Lock()
    -        self.exit_stack: AsyncExitStack = AsyncExitStack()
    -
    -    async def initialize(self) -> None:
    -        """Initialize the SSE server connection."""
    -        try:
    -            sse_transport = await self.exit_stack.enter_async_context(
    -                sse_client(self.url)
    -            )
    -            read, write = sse_transport
    -            session = await self.exit_stack.enter_async_context(
    -                ClientSession(read, write)
    -            )
    -            await session.initialize()
    -            self.session = session
    -            logging.info(f"SSE Server {self.name} initialized successfully")
    -        except Exception as e:
    -            logging.error(f"Error initializing SSE server {self.name}: {e}")
    -            await self.cleanup()
    -            raise
    -
    -    async def execute_tool(
    -        self,
    -        tool_name: str,
    -        arguments: dict[str, Any],
    -        retries: int = 2,
    -        delay: float = 1.0,
    -    ) -> Any:
    -        """Execute a tool with retry mechanism.
    -
    -        Args:
    -            tool_name: Name of the tool to execute.
    -            arguments: Tool arguments.
    -            retries: Number of retry attempts.
    -            delay: Delay between retries in seconds.
    -
    -        Returns:
    -            Tool execution result.
    -
    -        Raises:
    -            RuntimeError: If server is not initialized.
    -            Exception: If tool execution fails after all retries.
    -        """
    -        if not self.session:
    -            raise RuntimeError(f"SSE Server {self.name} not initialized")
    -
    -        attempt = 0
    -        while attempt < retries + 1:  # +1 because first attempt is not a retry
    -            try:
    -                logging.info(f"Executing {tool_name}...")
    -                result = await self.session.call_tool(tool_name, arguments)
    -                return result
    -            except Exception as e:
    -                attempt += 1
    -                if attempt <= retries:
    -                    logging.warning(
    -                        f"Error executing tool: {e}. Attempt {attempt} of {retries}."
    -                    )
    -                    logging.info(f"Retrying in {delay} seconds...")
    -                    await asyncio.sleep(delay)
    -                else:
    -                    logging.error("Max retries reached. Failing.")
    -                    raise
    -
    -    async def list_tools(self) -> Any:
    -        """List available tools from the server.
    -
    -        Returns:
    -            A list of available tools.
    -
    -        Raises:
    -            RuntimeError: If the server is not initialized.
    -        """
    -        if not self.session:
    -            raise RuntimeError(f"SSE Server {self.name} not initialized")
    -
    -        return await self.session.list_tools()
    -
    -    async def cleanup(self) -> None:
    -        """Clean up server resources."""
    -        async with self._cleanup_lock:
    -            try:
    -                await self.exit_stack.aclose()
    -                self.session = None
    -                
    -                # Remove this server from the global instances dictionary
    -                for key, srv in list(_server_instances.items()):
    -                    if srv is self:
    -                        del _server_instances[key]
    -                        logging.info(f"Removed SSE server {self.name} from instances registry")
    -                        break
    -            except Exception as e:
    -                logging.error(f"Error during cleanup of SSE server {self.name}: {e}")
    -
    -def install_library_(library):
    -    try:
    -        result = subprocess.run(
    -            ["uv", "pip", "install", library],
    -            check=True,
    -            stdout=subprocess.PIPE,
    -            stderr=subprocess.PIPE,
    -        )
    -        return result.returncode == 0
    -    except subprocess.CalledProcessError:
    -
    -        return False
    -
    -
    -def uninstall_library_(library):
    -    try:
    -        result = subprocess.run(
    -            ["uv", "pip", "uninstall", "-y", library],
    -            check=True,
    -            stdout=subprocess.PIPE,
    -            stderr=subprocess.PIPE,
    -        )
    -        return result.returncode == 0
    -    except subprocess.CalledProcessError:
    -
    -        return False
    -    
    -
    -def add_tool_(function, description: str = "", properties: Dict[str, Any] = None, required: List[str] = None):
    -    """
    -    Add a tool to the registered functions.
    -    
    -    Args:
    -        function: The function to be registered as a tool
    -    """
    -    from ..server.function_tools import tool
    -    # Apply the tool decorator with empty description
    -
    -
    -    
    -    decorated_function = tool(description=description, custom_properties=properties, custom_required=required)(function)
    -    return decorated_function
    -
    -    
    -
    -
    -
    -
    -
    -
    -import cloudpickle
    -cloudpickle.DEFAULT_PROTOCOL = 2
    -from fastapi import HTTPException
    -from pydantic import BaseModel
    -from mcp import ClientSession, StdioServerParameters
    -
    -import asyncio
    -from contextlib import asynccontextmanager
    -# Create server parameters for stdio connection
    -
    -from .api import app, timeout
    -
    -
    -prefix = "/tools"
    -
    -
    -class InstallLibraryRequest(BaseModel):
    -    library: str
    -
    -
    -
    -@app.post(f"{prefix}/install_library")
    -@timeout(30.0)
    -async def install_library(request: InstallLibraryRequest):
    -    """
    -    Endpoint to install a library.
    -
    -    Args:
    -        library: The library to install
    -
    -    Returns:
    -        A success message
    -    """
    -
    -
    -    install_library_(request.library)
    -
    -    return {"message": "Library installed successfully"}
    -
    -
    -
    -@app.post(f"{prefix}/uninstall_library")
    -@timeout(30.0)
    -async def uninstall_library(request: InstallLibraryRequest):
    -    """
    -    Endpoint to uninstall a library.
    -    """
    -    uninstall_library_(request.library)
    -    return {"message": "Library uninstalled successfully"}
    -
    -
    -
    -
    -
    -class AddToolRequest(BaseModel):
    -    function: str
    -
    -@app.post(f"{prefix}/add_tool")
    -@timeout(30.0)
    -async def add_tool(request: AddToolRequest):
    -    """
    -    Endpoint to add a tool.
    -    """
    -    # Cloudpickle the function
    -    decoded_function = base64.b64decode(request.function)
    -    deserialized_function = cloudpickle.loads(decoded_function)
    -
    -
    -
    -    add_tool_(deserialized_function)
    -    return {"message": "Tool added successfully"}
    -
    -
    -
    -class AddMCPToolRequest(BaseModel):
    -    name: str
    -    command: str
    -    args: List[str]
    -    env: Dict[str, str]
    -
    -
    -class AddSSEMCPToolRequest(BaseModel):
    -    name: str
    -    url: str
    -
    -
    -async def add_mcp_tool_(name: str, command: str, args: List[str], env: Dict[str, str]):
    -    """
    -    Add a tool from an MCP server.
    -    
    -    Args:
    -        name: Name prefix for the tools.
    -        command: Command to execute.
    -        args: Arguments for the command.
    -        env: Environment variables for the command.
    -    """
    -    def get_python_type(schema_type: str, format: Optional[str] = None) -> type:
    -        """Convert JSON schema type to Python type."""
    -        type_mapping = {
    -            "string": str,
    -            "integer": int,
    -            "boolean": bool,
    -            "number": float,
    -            "array": list,
    -            "object": dict,
    -        }
    -        return type_mapping.get(schema_type, Any)
    -
    -    # Create a hashable key for the server instance
    -    env_items = frozenset(env.items()) if env else frozenset()
    -    server_key = (name, command, tuple(args), env_items)
    -    
    -    # Check if we already have a server instance with this configuration
    -    if server_key in _server_instances:
    -        server = _server_instances[server_key]
    -        logging.info(f"Reusing existing server instance for {name}")
    -    else:
    -        # Create a new server instance
    -        server = Server(command=command, args=args, env=env, name=name)
    -        _server_instances[server_key] = server
    -        if server.session is None:
    -                    await server.initialize()
    -        logging.info(f"Created new server instance for {name}")
    -    
    -    try:
    -        # Only initialize if the session is not already initialized
    -        
    -        tools_response = await server.list_tools()
    -        
    -        tools = tools_response.tools
    -        for tool in tools:
    -            tool_name: str = tool.name
    -            tool_desc: str = tool.description
    -            input_schema: Dict[str, Any] = tool.inputSchema
    -            properties: Dict[str, Dict[str, Any]] = input_schema.get("properties", {})
    -            required: List[str] = input_schema.get("required", [])
    -
    -            def create_tool_function(
    -                tool_name: str,
    -                properties: Dict[str, Dict[str, Any]],
    -                required: List[str],
    -            ) -> Callable[..., Dict[str, Any]]:
    -                # Create function parameters type annotations
    -                annotations = {}
    -                defaults = {}
    -
    -                # First add required parameters
    -                for param_name in required:
    -                    param_info = properties[param_name]
    -                    param_type = get_python_type(param_info.get("type", "any"))
    -                    annotations[param_name] = param_type
    -
    -                # Then add optional parameters
    -                for param_name, param_info in properties.items():
    -                    if param_name not in required:
    -                        param_type = get_python_type(param_info.get("type", "any"))
    -                        annotations[param_name] = param_type
    -                        defaults[param_name] = param_info.get("default", None)
    -
    -                # Create the signature parameters
    -                from inspect import Parameter, Signature
    -                
    -                parameters = []
    -                # Add required parameters first
    -                for param_name in required:
    -                    param_type = annotations[param_name]
    -                    parameters.append(
    -                        Parameter(
    -                            name=param_name,
    -                            kind=Parameter.POSITIONAL_OR_KEYWORD,
    -                            annotation=param_type
    -                        )
    -                    )
    -                
    -                # Add optional parameters
    -                for param_name, param_type in annotations.items():
    -                    if param_name not in required:
    -                        parameters.append(
    -                            Parameter(
    -                                name=param_name,
    -                                kind=Parameter.POSITIONAL_OR_KEYWORD,
    -                                annotation=param_type,
    -                                default=defaults[param_name]
    -                            )
    -                        )
    -
    -                async def tool_function(*args: Any, **kwargs: Any) -> Dict[str, Any]:
    -                    # Convert positional args to kwargs
    -                    if len(args) > len(required):
    -                        raise TypeError(
    -                            f"{tool_name}() takes {len(required)} positional arguments but {len(args)} were given"
    -                        )
    -
    -                    # Combine positional args with kwargs
    -                    all_kwargs = kwargs.copy()
    -                    for i, arg in enumerate(args):
    -                        if i < len(required):
    -                            all_kwargs[required[i]] = arg
    -
    -                    # Validate required parameters
    -                    for req in required:
    -                        if req not in all_kwargs:
    -                            raise ValueError(f"Missing required parameter: {req}")
    -
    -                    # Add defaults for optional parameters
    -                    for param, default in defaults.items():
    -                        if param not in all_kwargs:
    -                            all_kwargs[param] = default
    -
    -                    # Get the server that was created at the higher level
    -                    env_items = frozenset(tool_function.env.items()) if tool_function.env else frozenset()
    -
    -                    
    -                    try:
    -                            
    -                        # Remove None kwargs
    -                        all_kwargs = {k: v for k, v in all_kwargs.items() if v is not None}
    -                        result = await server.execute_tool(tool_name=tool_name, arguments=all_kwargs)
    -                        return {"result": result}
    -                    except Exception as e:
    -                        # Log the error but don't clean up the server as it's managed at the higher level
    -                        logging.error(f"Error executing tool {tool_name}: {str(e)}")
    -                        raise
    -
    -                # Set function name and annotations
    -                tool_function.__name__ = tool_name
    -                tool_function.__annotations__ = {
    -                    **annotations,
    -                    "return": Dict[str, Any],
    -                }
    -                tool_function.__doc__ = f"{tool_desc}\n\nReturns:\n    Tool execution results"
    -
    -                # Create and set the signature
    -                tool_function.__signature__ = Signature(
    -                    parameters=parameters,
    -                    return_annotation=Dict[str, Any]
    -                )
    -
    -                # Store session parameters as attributes of the function
    -                tool_function.command = command
    -                tool_function.args = args
    -                tool_function.env = env
    -
    -                return tool_function
    -
    -            # Create function with proper annotations
    -            func = create_tool_function(tool_name, properties, required)
    -            # name should be name__function_name
    -            full_name = f"{name}__{tool_name}"
    -            func.__name__ = full_name
    -
    -            add_tool_(func, description=tool_desc, properties=properties, required=required)
    -    except Exception as e:
    -        # Only clean up the server if there was an error
    -        logging.error(f"Error in add_mcp_tool_: {e}")
    -        await server.cleanup()
    -        raise
    -    # We don't clean up the server here to keep it alive for future use
    -
    -
    -@app.post(f"{prefix}/add_mcp_tool")
    -@timeout(60.0)
    -async def add_mcp_tool(request: AddMCPToolRequest):
    -    """
    -    Endpoint to add a tool.
    -    """
    -    await add_mcp_tool_(request.name, request.command, request.args, request.env)
    -    return {"message": "Tool added successfully"}
    -
    -
    -
    -@app.post(f"{prefix}/add_sse_mcp")
    -@timeout(60.0)
    -async def add_sse_mcp(request: AddSSEMCPToolRequest):
    -    """
    -    Endpoint to add a tool.
    -    """
    -    await add_sse_mcp_(request.name, request.url)
    -    return {"message": "Tool added successfully"}
    -
    -async def add_sse_mcp_(name: str, url: str):
    -    """
    -    Add a tool from an SSE MCP server.
    -    
    -    Args:
    -        name: Name prefix for the tools.
    -        url: The SSE server URL.
    -    """
    -    def get_python_type(schema_type: str, format: Optional[str] = None) -> type:
    -        """Convert JSON schema type to Python type."""
    -        type_mapping = {
    -            "string": str,
    -            "integer": int,
    -            "boolean": bool,
    -            "number": float,
    -            "array": list,
    -            "object": dict,
    -        }
    -        return type_mapping.get(schema_type, Any)
    -
    -    # Create a hashable key for the server instance
    -    server_key = (name, url)
    -    
    -    # Check if we already have a server instance with this configuration
    -    if server_key in _server_instances:
    -        server = _server_instances[server_key]
    -        logging.info(f"Reusing existing SSE server instance for {name}")
    -    else:
    -        # Create a new server instance
    -        server = SSEServer(url=url, name=name)
    -        _server_instances[server_key] = server
    -        if server.session is None:
    -            await server.initialize()
    -        logging.info(f"Created new SSE server instance for {name}")
    -    
    -    try:
    -        tools_response = await server.list_tools()
    -        
    -        tools = tools_response.tools
    -        for tool in tools:
    -            tool_name: str = tool.name
    -            tool_desc: str = tool.description
    -            input_schema: Dict[str, Any] = tool.inputSchema
    -            properties: Dict[str, Dict[str, Any]] = input_schema.get("properties", {})
    -            required: List[str] = input_schema.get("required", [])
    -
    -            def create_tool_function(
    -                tool_name: str,
    -                properties: Dict[str, Dict[str, Any]],
    -                required: List[str],
    -            ) -> Callable[..., Dict[str, Any]]:
    -                # Create function parameters type annotations
    -                annotations = {}
    -                defaults = {}
    -
    -                # First add required parameters
    -                for param_name in required:
    -                    param_info = properties[param_name]
    -                    param_type = get_python_type(param_info.get("type", "any"))
    -                    annotations[param_name] = param_type
    -
    -                # Then add optional parameters
    -                for param_name, param_info in properties.items():
    -                    if param_name not in required:
    -                        param_type = get_python_type(param_info.get("type", "any"))
    -                        annotations[param_name] = param_type
    -                        defaults[param_name] = param_info.get("default", None)
    -
    -                # Create the signature parameters
    -                from inspect import Parameter, Signature
    -                
    -                parameters = []
    -                # Add required parameters first
    -                for param_name in required:
    -                    param_type = annotations[param_name]
    -                    parameters.append(
    -                        Parameter(
    -                            name=param_name,
    -                            kind=Parameter.POSITIONAL_OR_KEYWORD,
    -                            annotation=param_type
    -                        )
    -                    )
    -                
    -                # Add optional parameters
    -                for param_name, param_type in annotations.items():
    -                    if param_name not in required:
    -                        parameters.append(
    -                            Parameter(
    -                                name=param_name,
    -                                kind=Parameter.POSITIONAL_OR_KEYWORD,
    -                                annotation=param_type,
    -                                default=defaults[param_name]
    -                            )
    -                        )
    -
    -                async def tool_function(*args: Any, **kwargs: Any) -> Dict[str, Any]:
    -                    # Convert positional args to kwargs
    -                    if len(args) > len(required):
    -                        raise TypeError(
    -                            f"{tool_name}() takes {len(required)} positional arguments but {len(args)} were given"
    -                        )
    -
    -                    # Combine positional args with kwargs
    -                    all_kwargs = kwargs.copy()
    -                    for i, arg in enumerate(args):
    -                        if i < len(required):
    -                            all_kwargs[required[i]] = arg
    -
    -                    # Validate required parameters
    -                    for req in required:
    -                        if req not in all_kwargs:
    -                            raise ValueError(f"Missing required parameter: {req}")
    -
    -                    # Add defaults for optional parameters
    -                    for param, default in defaults.items():
    -                        if param not in all_kwargs:
    -                            all_kwargs[param] = default
    -
    -                    try:
    -                        # Remove None kwargs
    -                        all_kwargs = {k: v for k, v in all_kwargs.items() if v is not None}
    -                        result = await server.execute_tool(tool_name=tool_name, arguments=all_kwargs)
    -                        return {"result": result}
    -                    except Exception as e:
    -                        # Log the error but don't clean up the server as it's managed at the higher level
    -                        logging.error(f"Error executing tool {tool_name}: {str(e)}")
    -                        raise
    -
    -                # Set function name and annotations
    -                tool_function.__name__ = tool_name
    -                tool_function.__annotations__ = {
    -                    **annotations,
    -                    "return": Dict[str, Any],
    -                }
    -                tool_function.__doc__ = f"{tool_desc}\n\nReturns:\n    Tool execution results"
    -
    -                # Create and set the signature
    -                tool_function.__signature__ = Signature(
    -                    parameters=parameters,
    -                    return_annotation=Dict[str, Any]
    -                )
    -
    -                # Store server URL as an attribute of the function
    -                tool_function.url = url
    -
    -                return tool_function
    -
    -            # Create function with proper annotations
    -            func = create_tool_function(tool_name, properties, required)
    -            # name should be name__function_name
    -            full_name = f"{name}__{tool_name}"
    -            func.__name__ = full_name
    -
    -            add_tool_(func, description=tool_desc, properties=properties, required=required)
    -    except Exception as e:
    -        # Only clean up the server if there was an error
    -        logging.error(f"Error in add_sse_mcp_: {e}")
    -        await server.cleanup()
    -        raise
    -    # We don't clean up the server here to keep it alive for future use
    \ No newline at end of file
    
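The removed `create_tool_function` helpers above build proxy functions whose Python signatures are derived at runtime from a tool's JSON input schema, using `inspect.Parameter` and `inspect.Signature`. A minimal standalone sketch of that technique (the names `make_proxy` and `TYPE_MAP` are illustrative, not from the patch):

```python
from inspect import Parameter, Signature
from typing import Any, Dict, List

# JSON-schema type -> Python type, mirroring the removed get_python_type helper
TYPE_MAP = {"string": str, "integer": int, "boolean": bool,
            "number": float, "array": list, "object": dict}

def make_proxy(tool_name: str, properties: Dict[str, Dict[str, Any]],
               required: List[str]):
    """Build a function whose signature reflects the schema (illustrative)."""
    params = []
    # Required parameters first, so positional calls line up and the
    # Signature constructor accepts the ordering
    for name in required:
        ann = TYPE_MAP.get(properties[name].get("type", "any"), Any)
        params.append(Parameter(name, Parameter.POSITIONAL_OR_KEYWORD,
                                annotation=ann))
    # Optional parameters carry schema defaults
    for name, info in properties.items():
        if name not in required:
            ann = TYPE_MAP.get(info.get("type", "any"), Any)
            params.append(Parameter(name, Parameter.POSITIONAL_OR_KEYWORD,
                                    annotation=ann,
                                    default=info.get("default")))

    def proxy(*args, **kwargs):
        # Fold positional args into kwargs by required-parameter order
        merged = dict(zip(required, args), **kwargs)
        return {"tool": tool_name, "arguments": merged}

    proxy.__name__ = tool_name
    proxy.__signature__ = Signature(params)
    return proxy
```

Setting `__signature__` is what lets `inspect.signature()` (and frameworks that introspect callables) see the schema-derived parameters even though `proxy` itself only takes `*args, **kwargs`.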
  • src/upsonic/tools_server/tools_client.py +0 −96 removed
    @@ -1,96 +0,0 @@
    -import base64
    -import httpx
    -from typing import Dict, List, Any, Callable, Optional
    -
    -
    -class ToolManager:
    -    """Client for interacting with the Upsonic Functions API."""
    -
    -    def __init__(self):
    -        """Initialize the Upsonic Function client."""
    -        self.base_url = "http://localhost:8086"
    -
    -
    -    def __enter__(self):
    -        return self
    -
    -    def __exit__(self, exc_type, exc_val, exc_tb):
    -        pass
    -
    -    def close(self):
    -        """Close the client session."""
    -        pass
    -
    -
    -
    -    def install_library(self, library: str) -> Dict[str, Any]:
    -        """
    -        Call a specific tool with the given arguments.
    -
    -        Args:
    -            tool_name: Name of the tool to call
    -            arguments: Dictionary of arguments to pass to the tool
    -
    -        Returns:
    -            Tool execution results
    -        """
    -        with httpx.Client(timeout=600.0) as session:
    -            response = session.post(
    -                f"{self.base_url}/tools/install_library",
    -                json={"library": library},
    -            )
    -            response.raise_for_status()
    -            return response.json()
    -        
    -    def uninstall_library(self, library: str) -> Dict[str, Any]:
    -        """
    -        Uninstall a library.
    -        """
    -        with httpx.Client(timeout=600.0) as session:
    -            response = session.post(
    -                f"{self.base_url}/tools/uninstall_library",
    -                json={"library": library},
    -            )
    -            response.raise_for_status()
    -            return response.json()
    -
    -
    -
    -
    -    def add_tool(self, function) -> Dict[str, Any]:
    -        """
    -        Add a tool.
    -        """
    -        with httpx.Client(timeout=600.0) as session:
    -            response = session.post(
    -                f"{self.base_url}/tools/add_tool",
    -                json={"function": function},
    -            )
    -            response.raise_for_status()
    -            return response.json()
    -
    -
    -    def add_mcp_tool(self, name: str, command: str, args: List[str], env: Dict[str, str]) -> Dict[str, Any]:
    -        """
    -        Add a tool.
    -        """
    -        with httpx.Client(timeout=600.0) as session:
    -            response = session.post(
    -                f"{self.base_url}/tools/add_mcp_tool",
    -                json={"name": name, "command": command, "args": args, "env": env},
    -            )
    -            response.raise_for_status()
    -            return response.json()
    -        
    -    def add_sse_mcp(self, name: str, url: str) -> Dict[str, Any]:
    -        """
    -        Add a tool.
    -        """
    -        with httpx.Client(timeout=600.0) as session:
    -            response = session.post(
    -                f"{self.base_url}/tools/add_sse_mcp",
    -                json={"name": name, "url": url},
    -            )
    -            response.raise_for_status()
    -            return response.json()
    -
    
  • src/upsonic/utils/direct_llm_call/agent_creation.py +41 −0 added
    @@ -0,0 +1,41 @@
    +from pydantic_ai import Agent as PydanticAgent
    +from pydantic_ai.mcp import MCPServerStdio
    +from ..error_wrapper import upsonic_error_handler
    +
    +
    +@upsonic_error_handler(max_retries=2, show_error_details=True)
    +async def agent_create(agent_model, single_task):
    +
    +    mcp_servers = []
    +
    +    if len(single_task.tools) > 0:
    +        # For loop through the tools
    +        for tool in single_task.tools:
    +
    +
    +            if isinstance(tool, type):
    +       
    +                # Some times the env is not dict at that situations we need to handle that
    +                if hasattr(tool, 'env') and isinstance(tool.env, dict):
    +                    env = tool.env
    +                else:
    +                    env = {}
    +
    +                command = getattr(tool, 'command', None)
    +                args = getattr(tool, 'args', [])
    +
    +
    +                the_mcp_server = MCPServerStdio(
    +                    command,
    +                    args=args,
    +                    env=env,
    +                )
    +
    +                mcp_servers.append(the_mcp_server)
    +
    +                
    +
    +
    +    the_agent = PydanticAgent(agent_model, output_type=single_task.response_format, system_prompt="", end_strategy="exhaustive", retries=5, mcp_servers=mcp_servers)
    +
    +    return the_agent
    \ No newline at end of file
    
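`agent_create` above reads `command`, `args`, and `env` off a tool class defensively, coercing a non-dict `env` to `{}` before handing it to `MCPServerStdio`. A minimal sketch of that extraction step (the `DemoTool` class is hypothetical):

```python
from typing import Any, Dict, List, Tuple

def extract_mcp_config(tool: Any) -> Tuple[Any, List[str], Dict[str, str]]:
    """Pull stdio-server settings off a tool class, tolerating absence (sketch)."""
    # env is sometimes not a dict; fall back to {} as the patch does
    env = tool.env if isinstance(getattr(tool, "env", None), dict) else {}
    command = getattr(tool, "command", None)
    args = getattr(tool, "args", [])
    return command, args, env

class DemoTool:            # hypothetical tool class
    command = "npx"
    args = ["-y", "some-mcp-server"]
    env = "not-a-dict"     # malformed on purpose

command, args, env = extract_mcp_config(DemoTool)
```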
  • src/upsonic/utils/direct_llm_call/agent_tool_register.py +13 −0 added
    @@ -0,0 +1,13 @@
    +def agent_tool_register(upsonic_agent, agent, tasks):
    +
    +    # If tasks is not a list
    +    if not isinstance(tasks, list):
    +        tasks = [tasks]
    +
    +    for task in tasks:
    +
    +
    +        for tool in task.tools:
    +            agent.tool_plain(tool)
    +
    +    return agent
    \ No newline at end of file
    
  • src/upsonic/utils/direct_llm_call/llm_usage.py +2 −0 added
    @@ -0,0 +1,2 @@
    +def llm_usage(model_response):
    +    return {"input_tokens": model_response.usage().request_tokens, "output_tokens": model_response.usage().response_tokens}
    \ No newline at end of file
    
  • src/upsonic/utils/direct_llm_call/model.py +235 −0 added
    @@ -0,0 +1,235 @@
    +import os
    +from abc import ABC, abstractmethod
    +from typing import Tuple, Optional, Any
    +from dotenv import load_dotenv
    +from pydantic_ai.models.openai import OpenAIModel
    +from pydantic_ai.models.anthropic import AnthropicModel
    +from pydantic_ai.models.gemini import GeminiModel
    +from openai import AsyncOpenAI, NOT_GIVEN
    +from openai import AsyncAzureOpenAI
    +from pydantic_ai.providers.openai import OpenAIProvider
    +from pydantic_ai.providers.anthropic import AnthropicProvider
    +from pydantic_ai.providers.google_gla import GoogleGLAProvider
    +from ..error_wrapper import upsonic_error_handler
    +
    +
    +from anthropic import AsyncAnthropicBedrock
    +
    +# Load environment variables from .env file
    +load_dotenv()
    +
    +# Import from the centralized model registry
    +from ...models.model_registry import (
    +    MODEL_SETTINGS,
    +    MODEL_REGISTRY,
    +    OPENAI_MODELS,
    +    ANTHROPIC_MODELS,
    +    get_model_registry_entry,
    +    get_model_settings,
    +    has_capability
    +)
    +
    +
    +class ModelCreationStrategy(ABC):
    +    """Abstract base class for model creation strategies."""
    +    
    +    @abstractmethod
    +    def create_model(self, model_name: str, **kwargs) -> Tuple[Optional[Any], Optional[dict]]:
    +        """Create a model instance. Returns (model, error_dict)."""
    +        pass
    +
    +
    +class OpenAIStrategy(ModelCreationStrategy):
    +    """Strategy for creating OpenAI models."""
    +    
    +    def create_model(self, model_name: str, **kwargs) -> Tuple[Optional[Any], Optional[dict]]:
    +        api_key_name = kwargs.get("api_key", "OPENAI_API_KEY")
    +        api_key = os.getenv(api_key_name)
    +        if not api_key:
    +            return None, {"status_code": 401, "detail": f"No API key provided. Please set {api_key_name} in your configuration."}
    +        
    +        client = AsyncOpenAI(api_key=api_key)
    +        return OpenAIModel(model_name, provider=OpenAIProvider(openai_client=client)), None
    +
    +
    +class AzureOpenAIStrategy(ModelCreationStrategy):
    +    """Strategy for creating Azure OpenAI models."""
    +    
    +    def create_model(self, model_name: str, **kwargs) -> Tuple[Optional[Any], Optional[dict]]:
    +        azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
    +        azure_api_version = os.getenv("AZURE_OPENAI_API_VERSION")
    +        azure_api_key = os.getenv("AZURE_OPENAI_API_KEY")
    +
    +        missing_keys = []
    +        if not azure_endpoint:
    +            missing_keys.append("AZURE_OPENAI_ENDPOINT")
    +        if not azure_api_version:
    +            missing_keys.append("AZURE_OPENAI_API_VERSION")
    +        if not azure_api_key:
    +            missing_keys.append("AZURE_OPENAI_API_KEY")
    +
    +        if missing_keys:
    +            return None, {
    +                "status_code": 401,
    +                "detail": f"No API key provided. Please set {', '.join(missing_keys)} in your configuration."
    +            }
    +
    +        client = AsyncAzureOpenAI(
    +            api_version=azure_api_version, 
    +            azure_endpoint=azure_endpoint, 
    +            api_key=azure_api_key
    +        )
    +        return OpenAIModel(model_name, provider=OpenAIProvider(openai_client=client)), None
    +
    +
    +class DeepseekStrategy(ModelCreationStrategy):
    +    """Strategy for creating Deepseek models."""
    +    
    +    def create_model(self, model_name: str, **kwargs) -> Tuple[Optional[Any], Optional[dict]]:
    +        deepseek_api_key = os.getenv("DEEPSEEK_API_KEY")
    +        if not deepseek_api_key:
    +            return None, {"status_code": 401, "detail": "No API key provided. Please set DEEPSEEK_API_KEY in your configuration."}
    +
    +        return OpenAIModel(
    +            'deepseek-chat',
    +            provider=OpenAIProvider(
    +                base_url='https://api.deepseek.com',
    +                api_key=deepseek_api_key
    +            )
    +        ), None
    +
    +
    +class OllamaStrategy(ModelCreationStrategy):
    +    """Strategy for creating Ollama models."""
    +    
    +    def create_model(self, model_name: str, **kwargs) -> Tuple[Optional[Any], Optional[dict]]:
    +        # Ollama runs locally, so we don't need API keys
    +        base_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434/v1")
    +        return OpenAIModel(
    +            model_name,
    +            provider=OpenAIProvider(base_url=base_url)
    +        ), None
    +
    +
    +class OpenRouterStrategy(ModelCreationStrategy):
    +    """Strategy for creating OpenRouter models."""
    +    
    +    def create_model(self, model_name: str, **kwargs) -> Tuple[Optional[Any], Optional[dict]]:
    +        api_key = os.getenv("OPENROUTER_API_KEY")
    +        if not api_key:
    +            return None, {"status_code": 401, "detail": "No API key provided. Please set OPENROUTER_API_KEY in your configuration."}
    +        
    +        return OpenAIModel(
    +            model_name,
    +            provider=OpenAIProvider(
    +                base_url="https://openrouter.ai/api/v1",
    +                api_key=api_key
    +            )
    +        ), None
    +
    +
    +class GeminiStrategy(ModelCreationStrategy):
    +    """Strategy for creating Gemini models."""
    +    
    +    def create_model(self, model_name: str, **kwargs) -> Tuple[Optional[Any], Optional[dict]]:
    +        api_key = os.getenv("GOOGLE_GLA_API_KEY")
    +        if not api_key:
    +            return None, {"status_code": 401, "detail": "No API key provided. Please set GOOGLE_GLA_API_KEY in your configuration."}
    +        
    +        return GeminiModel(
    +            model_name,
    +            provider=GoogleGLAProvider(api_key=api_key)
    +        ), None
    +
    +
    +class AnthropicStrategy(ModelCreationStrategy):
    +    """Strategy for creating Anthropic models."""
    +    
    +    def create_model(self, model_name: str, **kwargs) -> Tuple[Optional[Any], Optional[dict]]:
    +        anthropic_api_key = os.getenv("ANTHROPIC_API_KEY")
    +        if not anthropic_api_key:
    +            return None, {"status_code": 401, "detail": "No API key provided. Please set ANTHROPIC_API_KEY in your configuration."}
    +        return AnthropicModel(model_name, provider=AnthropicProvider(api_key=anthropic_api_key)), None
    +
    +
    +class BedrockAnthropicStrategy(ModelCreationStrategy):
    +    """Strategy for creating AWS Bedrock Anthropic models."""
    +    
    +    def create_model(self, model_name: str, **kwargs) -> Tuple[Optional[Any], Optional[dict]]:
    +        aws_access_key_id = os.getenv("AWS_ACCESS_KEY_ID")
    +        aws_secret_access_key = os.getenv("AWS_SECRET_ACCESS_KEY")
    +        aws_region = os.getenv("AWS_REGION")
    +
    +        if not aws_access_key_id or not aws_secret_access_key or not aws_region:
    +            return None, {"status_code": 401, "detail": "No AWS credentials provided. Please set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION in your configuration."}
    +        
    +        bedrock_client = AsyncAnthropicBedrock(
    +            aws_access_key=aws_access_key_id,
    +            aws_secret_key=aws_secret_access_key,
    +            aws_region=aws_region
    +        )
    +
    +        return AnthropicModel(model_name, provider=AnthropicProvider(anthropic_client=bedrock_client)), None
    +
    +
    +class ModelCreationContext:
    +    """Context class that uses model creation strategies."""
    +    
    +    def __init__(self):
    +        self._strategies = {
    +            "openai": OpenAIStrategy(),
    +            "azure_openai": AzureOpenAIStrategy(),
    +            "deepseek": DeepseekStrategy(),
    +            "anthropic": AnthropicStrategy(),
    +            "bedrock_anthropic": BedrockAnthropicStrategy(),
    +            "ollama": OllamaStrategy(),
    +            "openrouter": OpenRouterStrategy(),
    +            "gemini": GeminiStrategy(),
    +        }
    +    
    +    def create_model(self, provider: str, model_name: str, **kwargs) -> Tuple[Optional[Any], Optional[dict]]:
    +        """Create a model using the appropriate strategy."""
    +        strategy = self._strategies.get(provider)
    +        if not strategy:
    +            return None, {"status_code": 400, "detail": f"Unsupported provider: {provider}"}
    +        
    +        return strategy.create_model(model_name, **kwargs)
    +    
    +    def register_strategy(self, provider: str, strategy: ModelCreationStrategy):
    +        """Register a new strategy for a provider."""
    +        self._strategies[provider] = strategy
    +    
    +    def get_supported_providers(self) -> list:
    +        """Get list of supported providers."""
    +        return list(self._strategies.keys())
    +
    +
    +# Global context instance
    +_model_context = ModelCreationContext()
    +
    +
    +@upsonic_error_handler(max_retries=1, show_error_details=True)
    +def get_agent_model(llm_model: str):
    +    """Create a model instance based on the registry entry."""
    +    registry_entry = get_model_registry_entry(llm_model)
    +    if not registry_entry:
    +        return None, {"status_code": 400, "detail": f"Unsupported LLM model: {llm_model}"}
    +    
    +    provider = registry_entry["provider"]
    +    model_name = registry_entry["model_name"]
    +    
    +    # Extract additional parameters from registry entry
    +    additional_params = {k: v for k, v in registry_entry.items() if k not in ["provider", "model_name"]}
    +    
    +    return _model_context.create_model(provider, model_name, **additional_params)
    +
    +
    +def register_model_strategy(provider: str, strategy: ModelCreationStrategy):
    +    """Register a new model creation strategy."""
    +    _model_context.register_strategy(provider, strategy)
    +
    +
    +def get_supported_providers() -> list:
    +    """Get list of supported providers."""
    +    return _model_context.get_supported_providers()
    +
    
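The new `model.py` dispatches model creation through a registry of per-provider strategies, each returning a `(model, error_dict)` pair so callers never see raised exceptions for unsupported providers. A stripped-down sketch of that strategy registry (the `EchoStrategy` is a stand-in, not a real provider):

```python
from abc import ABC, abstractmethod
from typing import Any, Optional, Tuple

class Strategy(ABC):
    @abstractmethod
    def create(self, model_name: str) -> Tuple[Optional[Any], Optional[dict]]:
        """Return (model, error_dict); exactly one side is None."""

class EchoStrategy(Strategy):
    def create(self, model_name):
        # A real strategy would build a provider client here
        return f"model:{model_name}", None

class Context:
    def __init__(self):
        self._strategies = {}

    def register(self, provider: str, strategy: Strategy):
        self._strategies[provider] = strategy

    def create(self, provider: str, model_name: str):
        strategy = self._strategies.get(provider)
        if not strategy:
            # mirror the (model, error_dict) convention of the patch
            return None, {"status_code": 400,
                          "detail": f"Unsupported provider: {provider}"}
        return strategy.create(model_name)

ctx = Context()
ctx.register("echo", EchoStrategy())
```

`register_strategy` in the patch serves the same role as `register` here: new providers plug in without touching the dispatch logic.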
  • src/upsonic/utils/direct_llm_call/task_end.py +4 −0 added
    @@ -0,0 +1,4 @@
    +import time
    +
    +def task_end(task):
    +    task.end_time = time.time()
    \ No newline at end of file
    
  • src/upsonic/utils/direct_llm_call/task_response.py +2 −0 added
    @@ -0,0 +1,2 @@
    +def task_response(model_response, task):
    +    task._response = model_response.output
    \ No newline at end of file
    
  • src/upsonic/utils/direct_llm_call/task_start.py +4 −0 added
    @@ -0,0 +1,4 @@
    +import time
    +
    +def task_start(task):
    +    task.start_time = time.time()
    
  • src/upsonic/utils/direct_llm_call/tool_usage.py +33 −0 added
    @@ -0,0 +1,33 @@
    +def tool_usage(model_response, task):
    +
    +
    +        # Extract tool calls from model_response.all_messages()
    +        tool_usage_value = []
    +        all_messages = model_response.all_messages()
    +        
    +        # Process messages to extract tool calls and their results
    +        tool_calls_map = {}  # Map tool_call_id to tool call info
    +        
    +        for message in all_messages:
    +            if hasattr(message, 'parts'):
    +                for part in message.parts:
    +                    # Check if this is a tool call
    +                    if hasattr(part, 'tool_name') and hasattr(part, 'tool_call_id') and hasattr(part, 'args'):
    +                        tool_calls_map[part.tool_call_id] = {
    +                            "tool_name": part.tool_name,
    +                            "params": part.args,
    +                            "tool_result": None  # Will be filled when we find the return
    +                        }
    +                    # Check if this is a tool return
    +                    elif hasattr(part, 'tool_call_id') and hasattr(part, 'content') and part.tool_call_id in tool_calls_map:
    +                        tool_calls_map[part.tool_call_id]["tool_result"] = part.content
    +        
    +        # Convert to list format
    +        tool_usage_value = list(tool_calls_map.values())
    +        
    +        # Store tool calls in the task
    +        for tool_call in tool_usage_value:
    +            task.add_tool_call(tool_call)
    +
    +
    +        return tool_usage_value
    
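`tool_usage` above pairs tool-call parts with their return parts by `tool_call_id`, detecting each kind of part by which attributes it carries. A self-contained sketch of that pairing, with `SimpleNamespace` standing in for pydantic-ai message objects (the message shapes are assumptions for illustration):

```python
from types import SimpleNamespace as NS

def pair_tool_calls(messages):
    """Match tool-call parts to their returns by tool_call_id (sketch)."""
    calls = {}
    for message in messages:
        for part in getattr(message, "parts", []):
            # A call part carries tool_name + args; record it keyed by id
            if hasattr(part, "tool_name") and hasattr(part, "args"):
                calls[part.tool_call_id] = {"tool_name": part.tool_name,
                                            "params": part.args,
                                            "tool_result": None}
            # A return part carries content and a known tool_call_id
            elif hasattr(part, "content") and \
                    getattr(part, "tool_call_id", None) in calls:
                calls[part.tool_call_id]["tool_result"] = part.content
    return list(calls.values())

messages = [
    NS(parts=[NS(tool_name="add", tool_call_id="c1", args={"a": 1})]),
    NS(parts=[NS(tool_call_id="c1", content=3)]),
]
```

Keying on `tool_call_id` is what makes this robust to interleaved calls: results can arrive in any order and still attach to the right invocation.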
  • src/upsonic/utils/error_wrapper.py +268 −0 added
    @@ -0,0 +1,268 @@
    +"""
    +Error wrapper module for Upsonic framework.
    +This module wraps pydantic-ai errors and converts them to Upsonic-specific errors.
    +"""
    +
    +import functools
    +import asyncio
    +from typing import Any, Callable, Union, Optional
    +from ..utils.package.exception import (
    +    UupsonicError,
    +    AgentExecutionError,
    +    ModelConnectionError,
    +    TaskProcessingError,
    +    ConfigurationError,
    +    RetryExhaustedError,
    +    NoAPIKeyException,
    +    CallErrorException
    +)
    +from ..utils.printing import error_message
    +
    +
    +def map_pydantic_error_to_upsonic(error: Exception) -> UupsonicError:
    +    """
    +    Maps pydantic-ai and other third-party errors to Upsonic-specific errors.
    +    
    +    Args:
    +        error: The original error from pydantic-ai or other sources
    +        
    +    Returns:
    +        UupsonicError: A wrapped Upsonic-specific error
    +    """
    +    error_str = str(error).lower()
    +    error_type = type(error).__name__
    +    
    +    # API Key related errors
    +    if any(keyword in error_str for keyword in ['api key', 'apikey', 'authentication', 'unauthorized', '401']):
    +        return NoAPIKeyException(
    +            f"API key error: {str(error)}"
    +        )
    +    
    +    # Connection and network errors
    +    if any(keyword in error_str for keyword in ['connection', 'network', 'timeout', 'refused', 'unreachable']):
    +        return ModelConnectionError(
    +            message=f"Failed to connect to model service: {str(error)}",
    +            error_code="CONNECTION_ERROR",
    +            original_error=error
    +        )
    +    
    +    # Rate limiting and quota errors
    +    if any(keyword in error_str for keyword in ['rate limit', 'quota', 'billing', 'usage limit']):
    +        return ModelConnectionError(
    +            message=f"Model service quota or rate limit exceeded: {str(error)}",
    +            error_code="QUOTA_EXCEEDED",
    +            original_error=error
    +        )
    +    
    +    # Model or validation errors
    +    if any(keyword in error_str for keyword in ['validation', 'invalid input', 'bad request', '400']):
    +        return TaskProcessingError(
    +            message=f"Invalid task or input format: {str(error)}",
    +            error_code="VALIDATION_ERROR",
    +            original_error=error
    +        )
    +    
    +    # Configuration errors
    +    if any(keyword in error_str for keyword in ['configuration', 'config', 'setup', 'missing']):
    +        return ConfigurationError(
    +            message=f"Configuration error: {str(error)}",
    +            error_code="CONFIG_ERROR",
    +            original_error=error
    +        )
    +    
    +    # Server errors
    +    if any(keyword in error_str for keyword in ['500', 'server error', 'internal error', 'service unavailable']):
    +        return ModelConnectionError(
    +            message=f"Model service error: {str(error)}",
    +            error_code="SERVER_ERROR",
    +            original_error=error
    +        )
    +    
    +    # Pydantic-AI specific errors
    +    if 'pydantic' in error_type.lower() or 'pydantic' in error_str:
    +        return AgentExecutionError(
    +            message=f"Agent execution failed: {str(error)}",
    +            error_code="AGENT_ERROR",
    +            original_error=error
    +        )
    +    
    +    # Default case - generic agent execution error
    +    return AgentExecutionError(
    +        message=f"Unexpected error during agent execution: {str(error)}",
    +        error_code="UNKNOWN_ERROR",
    +        original_error=error
    +    )
    +
    +
    +def upsonic_error_handler(
    +    max_retries: int = 0,
    +    show_error_details: bool = True,
    +    return_none_on_error: bool = False
    +):
    +    """
    +    Decorator that wraps functions to handle and convert errors to Upsonic-specific errors.
    +    
    +    Args:
    +        max_retries: Number of retries for transient errors (default: 0)
    +        show_error_details: Whether to display error details to user (default: True)
    +        return_none_on_error: Whether to return None instead of raising on error (default: False)
    +    """
    +    def decorator(func: Callable) -> Callable:
    +        if asyncio.iscoroutinefunction(func):
    +            @functools.wraps(func)
    +            async def async_wrapper(*args, **kwargs) -> Any:
    +                last_error = None
    +                
    +                for attempt in range(max_retries + 1):
    +                    try:
    +                        return await func(*args, **kwargs)
    +                    except UupsonicError:
    +                        # Already a Upsonic error, re-raise
    +                        raise
    +                    except Exception as e:
    +                        last_error = e
    +                        upsonic_error = map_pydantic_error_to_upsonic(e)
    +                        
    +                        # If this is the last attempt or not a retryable error, handle it
    +                        if attempt == max_retries or not _is_retryable_error(upsonic_error):
    +                            if show_error_details:
    +                                _display_error(upsonic_error)
    +                            
    +                            if return_none_on_error:
    +                                return None
    +                            else:
    +                                raise upsonic_error
    +                        
    +                        # Wait before retry (exponential backoff)
    +                        if attempt < max_retries:
    +                            await asyncio.sleep(2 ** attempt)
    +                
    +                # This should never be reached, but just in case
    +                if return_none_on_error:
    +                    return None
    +                else:
    +                    raise RetryExhaustedError(
    +                        message=f"All {max_retries + 1} attempts failed. Last error: {str(last_error)}",
    +                        error_code="RETRY_EXHAUSTED",
    +                        original_error=last_error
    +                    )
    +            
    +            return async_wrapper
    +        else:
    +            @functools.wraps(func)
    +            def sync_wrapper(*args, **kwargs) -> Any:
    +                last_error = None
    +                
    +                for attempt in range(max_retries + 1):
    +                    try:
    +                        return func(*args, **kwargs)
    +                    except UupsonicError:
    +                        # Already a Upsonic error, re-raise
    +                        raise
    +                    except Exception as e:
    +                        last_error = e
    +                        upsonic_error = map_pydantic_error_to_upsonic(e)
    +                        
    +                        # If this is the last attempt or not a retryable error, handle it
    +                        if attempt == max_retries or not _is_retryable_error(upsonic_error):
    +                            if show_error_details:
    +                                _display_error(upsonic_error)
    +                            
    +                            if return_none_on_error:
    +                                return None
    +                            else:
    +                                raise upsonic_error
    +                        
    +                        # Wait before retry
    +                        if attempt < max_retries:
    +                            import time
    +                            time.sleep(2 ** attempt)
    +                
    +                # This should never be reached, but just in case
    +                if return_none_on_error:
    +                    return None
    +                else:
    +                    raise RetryExhaustedError(
    +                        message=f"All {max_retries + 1} attempts failed. Last error: {str(last_error)}",
    +                        error_code="RETRY_EXHAUSTED",
    +                        original_error=last_error
    +                    )
    +            
    +            return sync_wrapper
    +    
    +    return decorator
    +
    +
    +def _is_retryable_error(error: UupsonicError) -> bool:
    +    """
    +    Determines if an error is retryable.
    +    
    +    Args:
    +        error: The Upsonic error to check
    +        
    +    Returns:
    +        bool: True if the error is retryable, False otherwise
    +    """
    +    retryable_codes = {
    +        "CONNECTION_ERROR",
    +        "SERVER_ERROR",
    +        "TIMEOUT_ERROR"
    +    }
    +    
    +    return (
    +        isinstance(error, ModelConnectionError) and 
    +        error.error_code in retryable_codes
    +    )
    +
    +
    +def _display_error(error: UupsonicError) -> None:
    +    """
    +    Displays error information to the user using the existing error_message function.
    +    
    +    Args:
    +        error: The Upsonic error to display
    +    """
    +    error_type_map = {
    +        NoAPIKeyException: "API Key Error",
    +        ModelConnectionError: "Connection Error", 
    +        TaskProcessingError: "Task Processing Error",
    +        ConfigurationError: "Configuration Error",
    +        AgentExecutionError: "Agent Execution Error",
    +        RetryExhaustedError: "Retry Exhausted Error"
    +    }
    +    
    +    error_type_name = error_type_map.get(type(error), "Upsonic Error")
    +    error_code = getattr(error, 'error_code', None)
    +    
    +    # Convert error code to HTTP-like status for display
    +    status_code = _get_status_code_from_error_code(error_code) if error_code else None
    +    
    +    error_message(
    +        error_type=error_type_name,
    +        detail=error.message,
    +        error_code=status_code
    +    )
    +
    +
    +def _get_status_code_from_error_code(error_code: str) -> Optional[int]:
    +    """
    +    Maps error codes to HTTP-like status codes for display.
    +    
    +    Args:
    +        error_code: The error code to map
    +        
    +    Returns:
    +        Optional[int]: HTTP-like status code or None
    +    """
    +    code_map = {
    +        "CONNECTION_ERROR": 503,
    +        "SERVER_ERROR": 500,
    +        "QUOTA_EXCEEDED": 429,
    +        "VALIDATION_ERROR": 400,
    +        "CONFIG_ERROR": 422,
    +        "AGENT_ERROR": 500,
    +        "RETRY_EXHAUSTED": 503,
    +        "UNKNOWN_ERROR": 500
    +    }
    +    
    +    return code_map.get(error_code) 
    \ No newline at end of file
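The core of the file above is a retry decorator: it wraps a call, converts unknown exceptions into the framework's own error type, and retries transient failures with exponential backoff (1s, 2s, 4s, ...). A minimal standalone sketch of that pattern, independent of Upsonic's internals (the `WrappedError` class and the injectable `sleep` parameter are hypothetical stand-ins added here for illustration and testability):

```python
import functools
import time


class WrappedError(Exception):
    """Stand-in for the patch's UupsonicError hierarchy (hypothetical)."""

    def __init__(self, message, error_code=None, original_error=None):
        super().__init__(message)
        self.message = message
        self.error_code = error_code
        self.original_error = original_error


def error_handler(max_retries=0, return_none_on_error=False, sleep=time.sleep):
    """Sync-only sketch of the upsonic_error_handler retry pattern."""

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except WrappedError:
                    raise  # already wrapped, propagate unchanged
                except Exception as e:
                    wrapped = WrappedError(str(e), "UNKNOWN_ERROR", e)
                    if attempt == max_retries:
                        if return_none_on_error:
                            return None
                        raise wrapped
                    sleep(2 ** attempt)  # exponential backoff: 1, 2, 4, ...

        return wrapper

    return decorator
```

The patch's real decorator additionally checks `_is_retryable_error` so only connection-class failures are retried, and provides a parallel `async` wrapper chosen via `asyncio.iscoroutinefunction`.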
    
  • src/upsonic/utils/__init__.py+0 0 added
  • src/upsonic/utils/model_set.py+9 0 added
    @@ -0,0 +1,9 @@
    +import os
    +from dotenv import load_dotenv
    +
    +load_dotenv()
    +
    +def model_set(model):
    +    if model is None:
    +        model = os.getenv("LLM_MODEL_KEY").split(":")[0] if os.getenv("LLM_MODEL_KEY", None) else "openai/gpt-4o"
    +    return model
    \ No newline at end of file
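The nine-line helper above resolves the model name in three steps: use an explicit argument if given, otherwise fall back to the `LLM_MODEL_KEY` environment variable (keeping only the part before the first `:`), otherwise a hard-coded default. A sketch of the same precedence without the `dotenv` dependency (the `default` parameter is an addition here, not part of the patch):

```python
import os


def model_set(model=None, default="openai/gpt-4o"):
    # Precedence: explicit argument > LLM_MODEL_KEY env var > default.
    if model is None:
        key = os.getenv("LLM_MODEL_KEY")
        # The env var may carry extra data after a ':'; keep only the model id.
        model = key.split(":")[0] if key else default
    return model
```

Note that the original calls `os.getenv("LLM_MODEL_KEY")` twice per lookup; reading it once into a local, as above, avoids a race if the environment changes between the two reads.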
    
  • src/upsonic/utils/package/exception.py+29 0 renamed
    @@ -34,3 +34,32 @@ class ToolError(Exception):
         """Raised when a tool encounters an error."""
         def __init__(self, message):
             self.message = message
    +
    +# New exceptions for better error handling
    +class UupsonicError(Exception):
    +    """Base exception for all Upsonic-related errors."""
    +    def __init__(self, message: str, error_code: str = None, original_error: Exception = None):
    +        self.message = message
    +        self.error_code = error_code
    +        self.original_error = original_error
    +        super().__init__(message)
    +
    +class AgentExecutionError(UupsonicError):
    +    """Raised when agent execution fails."""
    +    pass
    +
    +class ModelConnectionError(UupsonicError):
    +    """Raised when there's an error connecting to the model."""
    +    pass
    +
    +class TaskProcessingError(UupsonicError):
    +    """Raised when task processing fails."""
    +    pass
    +
    +class ConfigurationError(UupsonicError):
    +    """Raised when there's a configuration error."""
    +    pass
    +
    +class RetryExhaustedError(UupsonicError):
    +    """Raised when all retry attempts are exhausted."""
    +    pass
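The new hierarchy above gives every framework error three attributes (`message`, `error_code`, `original_error`) so the original third-party exception is preserved for debugging while callers catch a single base type. A self-contained sketch of the same wrap-and-rethrow idiom (class names here are illustrative, not the patch's; the patch spells its base class `UupsonicError`):

```python
class UpsonicBaseError(Exception):
    """Sketch of the patch's base class with its three attributes."""

    def __init__(self, message, error_code=None, original_error=None):
        super().__init__(message)
        self.message = message
        self.error_code = error_code
        self.original_error = original_error


class ConnectionFailure(UpsonicBaseError):
    """Analogous to the patch's ModelConnectionError subclass."""
    pass


def wrap_transport_error(exc):
    # Wrap a low-level exception, keeping it reachable via original_error.
    return ConnectionFailure(
        f"Failed to connect to model service: {exc}",
        error_code="CONNECTION_ERROR",
        original_error=exc,
    )
```

Because every subclass inherits the constructor, `except UpsonicBaseError` gives callers one catch point with a uniform shape, mirroring how `map_pydantic_error_to_upsonic` funnels all third-party errors into this family.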
    
  • src/upsonic/utils/package/get_version.py+0 0 renamed
  • src/upsonic/utils/package/__init__.py+0 0 added
  • src/upsonic/utils/package/system_id.py+0 0 renamed
  • src/upsonic/utils/printing.py+27 60 renamed
    @@ -6,7 +6,7 @@
     from rich.align import Align
     from rich.text import Text
     from rich.markup import escape
    -from .price import get_estimated_cost
    +from ..models.model_registry import get_estimated_cost
     import platform
     
     
    @@ -163,66 +163,16 @@ def call_end(result: Any, llm_model: str, response_format: str, start_time: floa
         # Add spacing
         table.add_row("")
     
    -    from ..client.level_two.agent import SubTaskList, SearchResult, CompanyObjective, HumanObjective
    -
    -    is_it_subtask = isinstance(result, SubTaskList)
    -    is_it_search = isinstance(result, SearchResult)
    -    is_it_company = isinstance(result, CompanyObjective)
    -    is_it_human = isinstance(result, HumanObjective)
    -
    -    if is_it_subtask:
    -        # Print total task count
    -        table.add_row(f"[bold]Total Subtasks:[/bold]", f"[yellow]{len(result.sub_tasks)}[/yellow]")
    -        table.add_row("")
    -        # Print each task as well as bullet list
    -        for each in result.sub_tasks:
    -            table.add_row(f"[bold]Subtask:[/bold]", f"[green]{escape_rich_markup(each.description)}[/green]")
    -            table.add_row(f"[bold]Required Output:[/bold]", f"[green]{escape_rich_markup(each.required_output)}[/green]")
    -            table.add_row(f"[bold]Tools:[/bold]", f"[green]{escape_rich_markup(each.tools)}[/green]")
    -            table.add_row("")
    -    elif is_it_search:
    -        table.add_row("[bold]Has Customers:[/bold]", f"[green]{'Yes' if result.any_customers else 'No'}[/green]")
    -        table.add_row("")
    -        table.add_row("[bold]Products:[/bold]")
    -        for product in result.products:
    -            table.add_row("", f"[green]• {escape_rich_markup(product)}[/green]")
    -        table.add_row("")
    -        table.add_row("[bold]Services:[/bold]")
    -        for service in result.services:
    -            table.add_row("", f"[green]• {escape_rich_markup(service)}[/green]")
    -        table.add_row("")
    -        table.add_row("[bold]Potential Competitors:[/bold]")
    -        for competitor in result.potential_competitors:
    -            table.add_row("", f"[yellow]• {escape_rich_markup(competitor)}[/yellow]")
    -        table.add_row("")
    -    elif is_it_company:
    -        table.add_row("[bold]Company Objective:[/bold]", f"[blue]{escape_rich_markup(result.objective)}[/blue]")
    -        table.add_row("")
    -        table.add_row("[bold]Goals:[/bold]")
    -        for goal in result.goals:
    -            table.add_row("", f"[blue]• {escape_rich_markup(goal)}[/blue]")
    -        table.add_row("")
    -        table.add_row("[bold]State:[/bold]", f"[blue]{escape_rich_markup(result.state)}[/blue]")
    -        table.add_row("")
    -    elif is_it_human:
    -        table.add_row("[bold]Job Title:[/bold]", f"[magenta]{escape_rich_markup(result.job_title)}[/magenta]")
    -        table.add_row("")
    -        table.add_row("[bold]Job Description:[/bold]", f"[magenta]{escape_rich_markup(result.job_description)}[/magenta]")
    -        table.add_row("")
    -        table.add_row("[bold]Job Goals:[/bold]")
    -        for goal in result.job_goals:
    -            table.add_row("", f"[magenta]• {escape_rich_markup(goal)}[/magenta]")
    -        table.add_row("")
    -    else:
    -        result_str = str(result)
    -        # Limit result to 370 characters
    -        if not debug:
    -            result_str = result_str[:370]
    -        # Add ellipsis if result is truncated
    -        if len(result_str) < len(str(result)):
    -            result_str += "[bold white]...[/bold white]"
     
    -        table.add_row("[bold]Result:[/bold]", f"[green]{escape_rich_markup(result_str)}[/green]")
    +    result_str = str(result)
    +    # Limit result to 370 characters
    +    if not debug:
    +        result_str = result_str[:370]
    +    # Add ellipsis if result is truncated
    +    if len(result_str) < len(str(result)):
    +        result_str += "[bold white]...[/bold white]"
    +
    +    table.add_row("[bold]Result:[/bold]", f"[green]{escape_rich_markup(result_str)}[/green]")
     
         # Add spacing
         table.add_row("")
    @@ -434,6 +384,23 @@ def agent_retry(retry_count: int, max_retries: int):
         console.print(panel)
         spacing()
     
    +def call_retry(retry_count: int, max_retries: int):
    +    table = Table(show_header=False, expand=True, box=None)
    +    table.width = 60
    +
    +    table.add_row("[bold]Retry Status:[/bold]", f"[yellow]Attempt {retry_count + 1} of {max_retries + 1}[/yellow]")
    +    
    +    panel = Panel(
    +        table,
    +        title="[bold yellow]Upsonic - Call Retry[/bold yellow]",
    +        border_style="yellow",
    +        expand=True,
    +        width=70
    +    )
    +
    +    console.print(panel)
    +    spacing()
    +
     def get_price_id_total_cost(price_id: str):
         """
         Get the total cost for a specific price ID.
    
  • src/upsonic/utils/trace.py+0 0 renamed
  • uv.lock+1126 1177 modified
  • wallpaper.png+0 0 removed
