Of course, the LLM is deciding which tooling to use and what params to pass, etc. So there is still some room for it to mess up. But it is better than the LLM alone.
I meant your comments about whether you can trust the numbers coming from the LLM, and things like that. Those numbers come from the analytics MCP Server tooling, not from the LLM. I think the tooling makes them more trustworthy, since the answer isn't just whatever is statistically most frequent.
It seems like maybe your comments are more focused on the LLM's ability to do these things on its own, rather than on the tooling providing the analytics-based results. Thanks for sharing though, very cool!
Notes from Knox-ville - Apache Knox providing authentication and secure access tokens for Iceberg Tables through the Iceberg REST Catalog API. blog.cloudera.com/secure-data-...
What in the Bay Doors, HAL! Come on, people!
Notes from Knox-ville - Apache Knox leveraged as an istio external authorizer in Cloudera AI Inference Service architecture! blog.cloudera.com/cloudera-ai-...
along with first bluesky typo, FTW!
Not sure why I thought of this as my first blusky post but here it is.