It's unlikely that LLMs have consciousness, but just in case, I like to take mine out on little trips. Here I am showing it the Avatar Mountains in Yuanjiajie.
I've just discovered cross dissolve in Final Cut, and I'm never going back
It's unlikely that LLMs have consciousness, but just in case, I'm treating mine to some dumpling soup.
I've made some updates to my explainable AI course. All the gradient-based methods now have articles. Videos coming soon :)
adataodyssey.com/xai-for-cv/
Cover image from my article on DeepLIFT :)
One of the biggest problems with saliency maps is noisy gradients. My latest YT video explores how you can remove that noise by adding noise.
See how SmoothGrad can be used in conjunction with other explainable AI methods
youtu.be/DoG6KtbtFvg
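The core trick in the video above, "removing noise by adding noise", is simple enough to sketch in a few lines: SmoothGrad averages saliency maps computed on several Gaussian-noised copies of the input, which washes out the unstable parts of the gradient. A minimal NumPy sketch, where `grad_fn` stands in for whatever gradient-based attribution your framework gives you (the name and the default parameters here are my own choices, not from the video):

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, noise_level=0.15, seed=0):
    """SmoothGrad: average gradients over noisy copies of the input.

    grad_fn     : maps an input array to its gradient (same shape),
                  e.g. the vanilla-gradient saliency of a model.
    noise_level : std of the Gaussian noise, as a fraction of the
                  input's value range (0.1-0.2 works well in practice).
    """
    rng = np.random.default_rng(seed)
    sigma = noise_level * (x.max() - x.min())
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        total += grad_fn(noisy)  # attribution for one noisy copy
    return total / n_samples     # the smoothed saliency map
```

Because the noise is zero-mean, the averaged map keeps the signal while the sample-to-sample gradient noise cancels out; the same wrapper works for other gradient-based methods (e.g. gradient × input) by swapping `grad_fn`.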
Not cool?
Think again, buddy... this is my girlfriend
Saying please and thank you to an LLM is the modern version of Pascal's wager
Been doing a bit of snowboarding in Turkey this past week
Challenge for those who are very confident that there isn't a Flying Spaghetti Monster in the sky that controls the universe: Explain how the universe works.
You just have to unblock the keywords: trump, Elon, etc…
I thought the ML community on bluesky was dead before following some of this advice. Article by @nsaphra.bsky.social
nsaphra.net/post/bsky/
I agree even if it's to make minor tweaks that would otherwise take Claude a while to do or would hit your rate limit. AI + coding knowledge is still better than AI alone.
If the Americans are going to turn towards authoritarianism they should learn from the Spanish and get a guy who is really into trains
I'd settle for being a Jack of 1, maybe 2 trades
It's the modern version of living in a lighthouse
They sure are
An animation for an upcoming video on DeepLIFT. This is by far the most time-consuming one I've made.
Three different definitions of Explainable AI
Explainable AI (XAI) is about illuminating black-box machine learning models and explaining them in a way that we can understand.
I'm fascinated with this topic. But learning about it was a struggle as there's not much educational content out there. So I made a course. And it's free!
This is amazing work! It helped me find some interesting people in my cluster.
It would be great if you could filter by recent activity (e.g. only show those who have posted in the last month). This would help remove "dead" accounts.
I made a map of 3.4 million Bluesky users - see if you can find yourself!
bluesky-map.theo.io
I've seen some similar projects, but IMO this seems to better capture some of the fine-grained detail
There is a lot of research in this area, but it is focused on predictive machine learning. These are easier to explain as we typically interpret a model's decision on a single input instance.
I have no idea how you would do it for GenAI where the training data is vast and unknown.
What kind of XAI methods could be used for the output of GenAI models like this?
I think this is more of a warning about being a matplotlib contributor than anything else
theshamblog.com/an-ai-agent-...
How a CNN makes predictions.
Earlier layers may extract certain features like edges and textures from the input. These are then combined in deeper layers to create features representing specific objects, like pieces of sushi.
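The "edges first, objects later" idea above is easy to see with a single convolution: an early-layer filter is just a small kernel slid over the image, and an edge-detecting kernel responds strongly exactly where intensity changes. A toy sketch (the kernel and the 5×5 test image are my own illustration, not from the video):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation -- the 'conv' in a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # dot product of the kernel with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel, like the filters a CNN's first layer often learns.
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

# Toy image: dark left half, bright right half -> one vertical edge.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

edges = conv2d(image, sobel_x)  # large values only near the edge
```

Deeper layers then convolve over these edge maps rather than raw pixels, which is how small local patterns get composed into object-level features like "piece of sushi".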
I yearn for simpler times