Ask HN: How are you securing LLM code agents?

3 points | by woz_ 9 hours ago

5 comments

  • wanglet33 5 minutes ago
    The sandbox approach mentioned by @arty_prof is essential, but there's also the 'Data Leakage' side of the coin. If an LLM agent has access to your local filesystem to 'help' with code, it essentially has a map of your credentials. Aside from Dockerizing everything, are people using local, air-gapped LLMs for sensitive security logic to prevent the 'Phone Home' risk entirely? Curious if anyone has successfully integrated something like Ollama into their dev-flow for this specific reason.
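    One way to sketch the 'no phone home' idea in code: validate that the agent can only talk to a local endpoint before any request is built. The URL here follows Ollama's documented /api/generate route on its default port 11434; the guard function and its name are illustrative, not part of any framework.

    ```python
    from urllib.parse import urlparse

    # Hosts we treat as "local"; anything else is a phone-home risk.
    LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

    def assert_local_only(url: str) -> str:
        """Refuse to target any non-local LLM endpoint."""
        host = urlparse(url).hostname
        if host not in LOCAL_HOSTS:
            raise ValueError(f"Refusing non-local LLM endpoint: {host}")
        return url

    def build_ollama_request(prompt: str, model: str = "codellama") -> dict:
        """Build (but do not send) a payload for Ollama's /api/generate endpoint."""
        url = assert_local_only("http://localhost:11434/api/generate")
        return {"url": url, "json": {"model": model, "prompt": prompt, "stream": False}}
    ```

    The point is that the check happens in your own tooling layer, so even a prompt-injected agent can't redirect requests to a remote host.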
  • arty_prof 6 hours ago
    Best thing you can do is sandbox them, and always review what they want to change in config files (e.g. package.json).

    Restrict any DB operations; for example, block them from running migrations with Prisma ORM.

    Also restrict access to .env files and any project configuration containing credentials, even in dev environments.
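    That last point can be enforced in the tooling layer rather than by trust alone. Here's a minimal sketch of a path guard an agent's file tool could call before any read; the sandbox root, blocklist, and function name are all illustrative assumptions, not from any particular agent framework.

    ```python
    from pathlib import Path

    # Files an agent should never read, even in dev (illustrative blocklist).
    BLOCKED_NAMES = {".env", ".env.local", "credentials.json", "id_rsa"}
    ALLOWED_ROOT = Path("/workspace/project")  # hypothetical sandboxed project root

    def guard_read(relative_path: str) -> Path:
        """Resolve a path and reject anything outside the sandbox or on the blocklist."""
        p = (ALLOWED_ROOT / relative_path).resolve()
        root = ALLOWED_ROOT.resolve()
        if root not in p.parents and p != root:
            raise PermissionError(f"Path escapes sandbox: {p}")
        if p.name in BLOCKED_NAMES:
            raise PermissionError(f"Blocked credential file: {p.name}")
        return p
    ```

    Resolving the path first means `../`-style traversal is caught before the blocklist check, so the agent can't climb out of the project directory to reach credentials elsewhere.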

  • wnsdy95 9 hours ago
    What do you mean by securing? Do you mean strictly controlling the AI agent so it behaves safely, or keeping your data from being exposed through chat and the like?
  • qasim157 6 hours ago
    [dead]
  • maxbeech 5 hours ago
    [dead]