Little bit of everything!

Avid Swiftie (come join us at !taylorswift@poptalk.scrubbles.tech )

Gaming (Mass Effect, Witcher, and too much Satisfactory)

Sci-fi

I live for 90s TV sitcoms

  • 17 Posts
  • 420 Comments
Joined 3 years ago
Cake day: June 2nd, 2023

  • I do selfhost my own, and even tried my hand at building something like this myself. It runs pretty well; I’m able to have it integrate with HomeAssistant and kubectl. It can be done with consumer GPUs - I have a 4000 and it runs fine. You don’t get as much context, but it’s about minimizing what the LLM needs to know while calling agents. You have one LLM context that’s running a todo list, then you start a new one that’s in charge of step 1, which spins off more contexts for each subtask, and so on. It’s not that each agent needs its own GPU, it’s that each agent needs its own context.
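
    Roughly the shape of it, heavily simplified (this is a sketch, not my actual code; call_llm and the message format are stand-ins for whatever local inference endpoint you run):

    # Sketch: one planner context holds the todo list; every subtask gets its
    # own fresh context, and only short results flow back up to the planner.
    def call_llm(messages: list[dict]) -> str:
        # Placeholder: swap in a call to whatever local model server you host
        # (llama.cpp, vLLM, Ollama, ...). Canned reply so the sketch runs as-is.
        return "step 1: check the thing\nstep 2: fix the thing"

    def run_subtask(task: str) -> str:
        # Fresh context: the worker agent only ever sees its own task.
        messages = [
            {"role": "system", "content": "You are a worker agent. Be brief."},
            {"role": "user", "content": task},
        ]
        return call_llm(messages)

    def run_plan(goal: str) -> list[str]:
        # Planner context: knows the steps, never the full subtask transcripts.
        planner = [
            {"role": "system", "content": "Break the goal into short steps, one per line."},
            {"role": "user", "content": goal},
        ]
        steps = call_llm(planner).splitlines()
        return [run_subtask(step) for step in steps if step.strip()]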


  • I am and do - I have no qualms with AI if I host it myself. I let it have read access to some things: I have one that’s hooked up to my HomeAssistant and can do things like enable lighting or turn on devices. It’s all gated; I control what items I expose and what I don’t. I personally don’t want it reading my emails, but since I host it, it’s really not a big deal at all. I have another that gets the status of my servers, reads the metrics, and reports to me in the morning if there were any anomalies.

    I’m really sick of the “AI is just bad because AI is bad” take. It can be incredibly useful - IF you know its limitations and understand what is wrong with it. I don’t like corporate AI at scale for moral reasons, but running it at home has been incredibly helpful. I don’t trust it to do whatever it wants; that would be insane. I do, however, let it have read permissions on services to help me sort through piles of information that I cannot manage by myself (and I know you keep harping on it, but MCP servers and APIs also have permission structures; even if it did attempt to write something, my other services would block it and it’d be reported). When I do allow write access, it’s when I’m working directly with it, and I hit a button each time it attempts to write. Think spinning containers up or down on my cluster while I’m testing, or collecting info from the internet.

    AI, LLMs, agentic AI - these are tools. It’s not the hype every AI bro thinks it is, but it is another tool in the toolbelt. Completely ignoring it is on par with ignoring Photoshop when it came out, or WYSIWYG editors when they arrived for designing UIs.
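
    To be concrete about what “gated” means in practice, the pattern is basically this (simplified sketch with made-up function names; in my actual setup the approval is a button, not a terminal input()):

    # Sketch: read-only tools pass straight through, anything that writes
    # stops and waits for explicit human approval every single time.
    from typing import Callable

    def gated(action: Callable[..., str], writes: bool = False) -> Callable[..., str]:
        def wrapper(*args, **kwargs) -> str:
            if writes:
                # Stand-in for the "hit a button" approval step.
                answer = input(f"Allow write {action.__name__}{args}? [y/N] ")
                if answer.strip().lower() != "y":
                    return "denied by operator"
            return action(*args, **kwargs)
        return wrapper

    @gated
    def get_server_metrics(host: str) -> str:
        # Read access only: query your monitoring stack here.
        return f"metrics for {host}"

    # Writes are opt-in and approved one call at a time.
    scale_deployment = gated(lambda name, replicas: f"scaled {name} to {replicas}", writes=True)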


  • Sure! I use Kaniko (although I see now that it’s not maintained anymore). I’ll probably pull the image in locally to protect it…

    Kaniko handles the image build itself (so the build step doesn’t need a full Docker-in-Docker setup), and I found an action that I use, but it looks like that was taken down… Luckily I archived it! Make an action in Forgejo (I have an infrastructure group that I add public repos to for actions). This one is called action-koniko-build, and all it has is this action.yml file in it:

    name: Kaniko
    description: Build a container image using Kaniko
    inputs:
      Dockerfile:
        description: The Dockerfile to pass to Kaniko
        required: true
      image:
        description: Name and tag under which to upload the image
        required: true
      registry:
        description: Domain of the registry. Should be the same as the first path component of the tag.
        required: true
      username:
        description: Username for the container registry
        required: true
      password:
        description: Password for the container registry
        required: true
      context:
        description: Workspace for the build
        required: true
    runs:
      using: docker
      image: docker://gcr.io/kaniko-project/executor:debug
      entrypoint: /bin/sh
      args:
        - -c
        - |
          mkdir -p /kaniko/.docker
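          # Kaniko reads registry credentials from /kaniko/.docker/config.json,
          # so write a minimal auth entry (base64 of "username:password") for the target registry.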
          echo '{"auths":{"${{ inputs.registry }}":{"auth":"'$(printf "%s:%s" "${{ inputs.username }}" "${{ inputs.password }}" | base64 | tr -d '\n')'"}}}' > /kaniko/.docker/config.json
          echo Config file follows!
          cat /kaniko/.docker/config.json
          /kaniko/executor --insecure --dockerfile ${{ inputs.Dockerfile }} --destination ${{ inputs.image }} --context dir://${{ inputs.context }}     
    

    Then, you can use it directly like:

    name: Build and Deploy Docker Image
    
    on:
      push:
        branches:
          - main
      workflow_dispatch:
    
    jobs:
      build:
        runs-on: docker
    
        steps:
        # Checkout the repository
        - name: Checkout code
          uses: actions/checkout@v3
    
        - name: Get current date # This is just how I label my containers, do whatever you prefer
          id: date
          run: echo "::set-output name=date::$(date '+%Y%m%d-%H%M')"
    
        - uses:  path.to.your.forgejo.instance:port/infrastructure/action-koniko-build@main # This is what I said above, it references your infrastructure action, on the main branch
          with:
            Dockerfile: cluster/charts/auth/operator/Dockerfile
            image: path.to.your.forgejo.instance:port/group/repo:${{ steps.date.outputs.date }}
            registry: path.to.your.forgejo.instance:port/v1
            username: ${{ env.GITHUB_ACTOR }}
            password: ${{ secrets.RUNNER_TOKEN }} # I haven't found a good secret option that works well, I should see if they have fixed the built-in token
            context: ${{ env.GITHUB_WORKSPACE }}
    

    I run my runners in Kubernetes in the same cluster as my Forgejo instance, so this all hooks up pretty easily. Lmk if you want to see that setup, if it’s relevant. The big thing is that you’ll need the runner pods to be privileged, and there’s some fiddly stuff where you need to run both the runner and the “dind” container together.