#LinkLog

Latest posts tagged with #LinkLog on Bluesky

Journopoclypse! Yeah, na. I don't think so

a core thing this gets right: scarcity of positional goods is a conserved quantity, and time to compete for them is always finite. I think everything else -- the role of taste, understandable summaries -- follows #linklog


i think "always bet on text" is a nice engineering principle but exactly backwards economically when video is cheap. people seem to prefer better simulacra of "getting it from another person", even if it's lossier, especially when there are machines to deal with the text layer #linklog


"The instantaneous jump to commercialization molds a novel idea in the image of finance." A Galbraithian take here would be that finance is, for planning purposes, increasingly state-like in its ability to control uncertainty. How far can that stretch without also controlling violence? #linklog

Original post on zylstra.org

Bookmarked Vimeo Lays Off ‘Most’ of Its Staff, Allegedly Includes ‘the Entire Video Team’ (by Gizmodo) Vimeo was bought by the infamous Italian Bending Spoons last year (who previously bought Evernote, Meetup, Wetransfer, Eventbrite). For years Vimeo was a very usable video platform away from […]

TMF Associates blog » SpaceX’s Rorschach test

"It’s also undeniable that SpaceX needs more than just Starlink to justify a $1.5T valuation, given that even its expected lead investment bank, Morgan Stanley, only thinks Starlink revenues will get to $126B in 2040." -- this is the sharpest thing i've read about the SpaceX IPO yet #linklog

The Problem of Vanishing AI and K-12 Education The benefits attributed to AI may not be due to AI. What do we expect to gain by encouraging students and educators to use it?

"since no one had a clear sense of the problems they wanted to solve with AI or the efficiencies they wanted AI to create, the supposed benefits of AI were a mirage" -- the pixie dust mentality seems endemic, as does 'ignore the fixed costs'. good post, recommended blog more generally. #linklog

process vs. outcome Thesis: hockey, relative to other American sports, has a much higher percentage of players/coaches/fans who intuitively understand and think in terms of expe...

"Even when you do everything right ... you usually don't [succeed] ... all you can do is create good opportunities to [succeed]" -- this seems true in knowledge/persuasion work domains generally, e.g., teaching, research, policy, sales. Anyone have good knowledge/persuasion counterexamples? #linklog

Corporate Governance Authoritarianism In November, I was privileged to deliver the inaugural Tamar Frankel Lecture at Boston University Law School. Professor Frankel is a trailblazer in

Interesting to think about the social dynamics of a group of people enjoying great power and minimal accountability in one domain of their lives entering a domain traditionally designed around checks on power and high accountability. #linklog


some very good lines about statistical liturgy (as usual), and a reminder that science is everywhere and always a social phenomenon #linklog

The Empiricism Gap in Computer Science So far, I have argued that there is a dissonance between, on the one hand, CS’s founding myths, curricula, and self-image, and, on the other hand, the modern production of knowledge in computer scienc...

as much as I like to say that AI is a social science, I find this a reasonable argument that CS isn't quite there yet (and should get closer). on the other hand, mechanism designers' claims notwithstanding, economics could use more build-and-test if it's to build a real engineering culture #linklog


existence of Big Country Parochialism implies existence of Small Country Provincialism #linklog

Jane Austen as Applied Moral Philosopher Austen's fiction is not "about" romance. It's about character. And we need it now.

"We learn what generosity, vanity, or integrity look like by watching them play out in lived situations" -- This lens is more interesting to me lately than "Jane Austen, game theorist" (albeit complementary). Who is writing this kind of applied virtue ethics fiction particularly well today? #linklog


"Fifth, we estimate preliminary short-run price elasticities just above one, suggesting limited scope for Jevons-Paradox effects." I wonder whether the same holds for personal usage of companion chatbots. I'd expect not, they seem stickier, but it's an empirical question. #linklog
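A toy illustration of the elasticity point (my own sketch, not from the quoted paper): under constant-elasticity demand with elasticity near one, a price drop raises usage almost exactly in proportion, so total spend stays roughly flat and there is little room for Jevons-style backfire in expenditure terms.

```python
# Toy constant-elasticity demand: usage = k * price**(-epsilon).
# All numbers here are illustrative, not estimates from the paper.

def usage(price, epsilon, k=1.0):
    """Quantity demanded under constant price elasticity epsilon."""
    return k * price ** (-epsilon)

epsilon = 1.05          # "just above one", as in the quoted estimate
p0, p1 = 1.0, 0.5       # price of a unit of output halves

q0, q1 = usage(p0, epsilon), usage(p1, epsilon)
spend0, spend1 = p0 * q0, p1 * q1

print(f"usage multiplier: {q1 / q0:.2f}")          # ~2.07x more usage
print(f"spend multiplier: {spend1 / spend0:.2f}")  # ~1.04x, nearly flat
```

With elasticity exactly one, spend would be perfectly flat; "just above one" means usage more than doubles when price halves, but spending barely moves.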

Frank’s Paper Trails year-end review, not annotatable Bookmarked Paper Trails eindejaarsoverzicht (by Frank Meeuwsen) I want to annotate several of Frank Meeuwsen's recent postings in Hypothes.is: this one about his lovely book project, that one about the deeper meaning of his Claude Code efforts, and today one about streaming music. But for reasons unknown to me, the Hypothes.is browser bookmarklet doesn't work on his site. According to […]
How Accurate Are Learning Curves? We’ve talked several times on this substack (as well as in my book), about the learning curve, the observation that costs of a produced good tend to fall by some constant proportion for every cumulati...

my current perhaps-crank theory is that learning curves are mostly about the people in the industry, and so their variability is largely about relevant labor market conditions #linklog
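The learning-curve observation in the teaser (costs fall by a constant proportion for every cumulative doubling of production) is Wright's law, and it can be sketched in a few lines. Numbers below are illustrative only.

```python
# Wright's-law sketch of the learning curve: unit cost falls by a fixed
# fraction (the learning rate) for every doubling of cumulative production.
import math

def unit_cost(cumulative, c0=100.0, learning_rate=0.20):
    """Cost of the n-th unit, with a 20% drop per cumulative doubling."""
    b = -math.log2(1.0 - learning_rate)   # progress exponent
    return c0 * cumulative ** -b

for n in (1, 2, 4, 8):
    print(n, round(unit_cost(n), 1))   # 100.0, 80.0, 64.0, 51.2
```

The labor-market theory above would amount to saying the learning rate itself is not a technological constant but a function of who the industry can hire and retain.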

No, it’s not The Incentives—it’s you There’s a narrative I find kind of troubling, but that unfortunately seems to be growing more common in science. The core idea is that the mere existence of perverse incentives is a valid and…

every claim that "the incentives" support or deter certain kinds of behavior is also a statement about what kinds of external signals the claimant views as rewards or penalties #linklog

"AI" is bad UX [image: teapot from the cover of Don Norman’s “The Design of Everyday Things”, clumsily ‘shopped by me] "AI" means bad UX. There is an emergent strain of thought in...

I don't know that anyone has ever made a very good UX for sampling from high dimensional joint distributions, but etiquettes can work well in their native equilibria #linklog

Economics of Orbital vs Terrestrial Data Centers

From a quick skim this is about the best thing I've read about orbital data centers yet. Suppose orbital data centers happen as described here. What happens to near-term heavy-lift launch pricing? #linklog

Assuming that last part is accurate, I wouldn’t expect the situation to last much longer though — because once senior engineers do start figuring it out en masse, and in particular, once their employers figure it out, it’s going to be a bloodbath out there. Even the most productive juniors and mids are going to start feeling like net negatives. The blast radius is not limited to just engineers either. If I can cut a 20 person team that I manage down to 4 or 5 highly accountable and AI empowered senior engineers that report directly to me, then I don’t even need a project manager. I might not even need business analysts, because the seniors can be my domain experts. Depending on the domain, I might not even need designers.


it's interesting to think about the flavors of "senior thinking" -- the pattern here certainly isn't limited to software engineering. #linklog

obie.medium.com/what-happens...

Are we underestimating who is poor? Michael Green has caused a bit of a dust-up with a piece that was picked up by the Free Press, and which he followed up on at his substack newsletter.

"The high cost of housing is the cost required to get the 15 millionth household to wait." in a sense many households are tied for 15 millionth #linklog

Why AGI Will Not Happen — Tim Dettmers If you are reading this, you probably have strong opinions about AGI, superintelligence, and the future of AI. Maybe you believe we are on the cusp of a transformative breakthrough. Maybe you are skep...

a nice discussion of the physical basis of diminishing marginal returns / increasing marginal costs in compute #linklog

Context Plumbing, Intent Sensing and an AI Reverse Uno on Social Media Feeds?

## The Enterprise Strikes Back

Fears of the AI investment bubble potentially crashing the US stock market have abated slightly, despite AI revenues not yet looking like they will be able to repay the vast sums being invested for some considerable time. But OpenAI is clearly in a vulnerable position, and **Sam Altman has declared a code red** in response to the threat posed by Google. Ben Thompson frames the story in Star Wars terms, with **OpenAI and Nvidia having reached the Empire Strikes Back stage of the hero’s journey** now that Google has seemingly re-asserted its leading position across LLMs, AI apps and hardware. But whilst Nvidia is hoovering up money by selling chips, OpenAI is blowing cash in the opposite direction, and will need to come up with something very special if it is to survive and thrive beyond this initial wave. Given Altman’s talk of **superhuman persuasion**, let’s hope their saving grace is not a consumer AI advertising arms race with Google.

Meanwhile, the case for enterprise AI being the route to real returns and wider economic and social benefit continues to grow. Nvidia’s Jensen Huang is continuing to build enterprise and industrial partnerships to open up new markets for their chips; **he sees industrial use cases such as digital twins and product prototyping as bigger and more important opportunities than consumer chatbots**.

**Azeem Azhar began his roundup of the state of AI three years on from the launch of ChatGPT with a look at enterprise AI.** He sees very positive adoption and ROI signals that suggest this field will continue to be where AI has the greatest impact. Even looking at just Generative AI, rather than the more complicated world of agentic AI that needs a degree of organisational transformation to fulfil its promise, he sees strong adoption that suggests we are looking at a J-curve of productivity impact:

> _The best example, though, is JP Morgan, whose boss Jamie Dimon said: “We have shown that for $2 billion of expense, we have about $2 billion of benefit.” This is exactly what we would expect from a productivity J‑curve. **With any general‑purpose technology, a small set of early adopters captures gains first, while everyone else is reorienting their processes around the technology.** Electricity and information technology followed that pattern; AI is no exception. The difference now is the speed at which the leading edge is moving._

## Real vs Simulated Intelligence(s)

But models are just part of the puzzle in building smarter organisations, and we should not lose sight of the respective strengths of human and machine intelligence. **Neil Perkin has shared a good summary of a recent podcast by Dave Snowden about sense-making and the impact of AI**, covering some of his observations about the differences between human and machine reasoning, cognition and insight. I also joined a longer webinar with Snowden and others interested in AI and complexity last week, where he made similarly useful points, so Neil’s notes saved me a job.

> _Understanding these fundamental differences enables us to collaborate much more effectively with AI engines. LLMs can look like they have a deep understanding of a question but of course what they are really optimised for is identifying patterns and predicting the next most probable word in a sequence to mimic human-generated text. They are set up to minimise the difference from training data meaning that, by design, they trend towards the average and most probable._

Another important difference between LLMs and human reasoning is that language is not the same as intelligence – **it is only one part of how people think and communicate their knowledge, as Benjamin Riley wrote for the Verve**:

> _LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build._

If we mistake large language models and their predictive abilities as intelligence, then we risk denuding our own creative and cognitive superpowers. But perhaps if we use these stochastic parrots in more creative ways, they could play a role in helping us improve our own thinking, rather than just outsourcing it. **Advait Sarkar posed this question in a recent talk on behalf of Microsoft Research, and concluded that the idea has potential merit**:

> _You can demonstrably reintroduce critical thinking into AI-assisted workflows. You can reverse the loss of creativity and enhance it instead. You can build powerful tools for memory that enable knowledge workers to read and write at speed, with greater intentionality, and remember it too. It turns out, with the right principles of design, you can build tools that are the best of both worlds: applying the awesome speed and flexibility of this technology to protect and enhance human thought._

It would be good to see some practical applications of this idea in our use of GenAI within organisations, and especially for leaders.

## Deriving Context & Intent Needs Better Data

Another point Dave Snowden makes is that training data is ultimately more valuable and important than the individual models trained on it. This raises questions of digital sovereignty for any organisation or state trying to use AI without becoming dependent on AI platform providers like OpenAI. What should you own? What can you buy or rent? What should you build? If the current trajectory holds, it looks like open models will be commoditised and the real value will lie in data, world models and the apps and agents we build on top of the models.

But whilst we can use large historical data for training models, the operational needs of context engineering mean that this category of data should ideally be recent, atomic and fluidly connected, so that it can be used in different ways. Matt Webb is thinking about this from the point of view of discerning user intent, and he uses the term **context plumbing** to describe the complex task of integrating lots of different data feeds to create context in close to real-time. He goes on to get quite excited about **the potential to derive seed training data from popular platforms and marketplaces, and then apply agentic AI coding loops to fulfil the opportunities identified in the data** (at least I think that’s what he’s saying – see what you think).

It is worth reading these brain dumps alongside **Séb Krier’s recent essay Coasian Bargaining at Scale**, which postulates that personal agents (armed with your own context and intent) could do a better job of reducing transaction costs and other frictions in distributed negotiations compared to top-down approaches to navigating and balancing competing interests:

> _This is the essence of the work of Nobel laureate Ronald Coase, who argued that if bargaining were cheap and easy, a polluter and their neighbor could strike a private deal without any need for regulation. Of course sometimes some pollution would still happen, but the payoff to the neighbor would ensure that both parties are better off than the zero pollution or no-limits pollution counterfactuals. The tragedy is not the existence of the conflict, but the transaction costs that prevent these mutually beneficial deals from being discovered and executed. It’s also the lesson from Elinor Ostrom, who documented how real-world communities successfully govern shared resources like fisheries and forests through their own intricate local rules._

It is an interesting idea, and one that could help shape AI-enabled governance in the future. In the context of enterprise AI, we probably need to dig deeper into how we can derive, generate or synthesise training data specific to an organisation’s work to create world models and context that are rich enough to enable agentic AI operations, and perhaps even the kind of negotiated outcomes and compromises that Séb Krier has in mind.

This is not just a quantity question; it is also about how we structure and organise that data. Microsoft are doing some work on the semantic layer that helps people and agents make sense of data with what they are calling **Microsoft IQ, which is intended to bring intelligent capabilities to Fabric, Microsoft 365, and Azure AI Search**. Another angle on harnessing data intelligently is to democratise access to it, so that more people can help shape it, **and that is what Atlassian appear to be targeting with their acquisition of data cataloguing tool Secoda**.

## Could Agentic AI Play the Reverse Uno Card on Social Media?

Séb Krier’s piece is another reminder that personal agents are likely to emerge as solutions to many of the coordination challenges that led us down the perilous path of large-scale platforms and algorithmic sharing. I am in Copenhagen right now at the pre-launch gathering of a bold project to rebuild Europe’s social platforms. It aims to build on the energy and creativity that we were all so excited about in the early 2000s before Facebook and the big US platforms exploited our human need for connection to create ad-funded clickbait farms that have harmed our societies and democracies. Just today, the Guardian wrote about **a growing movement of young people across Europe seeking to reclaim their lives from big tech platforms**, and this trend looks set to grow.

Within the Matrix world of attention farming, we have seen the bad things that AI can do: algorithmic feeds, emotional manipulation, fake content, fake people, and so on. But what if it could also be part of re-humanising our connection with each other? There is a whole (small) world out there of people sharing their passions in niche social networks and communities, subreddits, discords or group chats. But the nature of scale-free networks and network effects means that Whatsapp, Facebook, Twitter, etc are still the easiest option for many people and groups in Europe just because that’s where their friends or families are to be found.

But what if we go back to some of those early social network ideas such as federation, interoperability and **the intention economy** to play a reverse Uno on algorithmic feeds? If everybody has their own discoverability and curation agent that pulls from multiple networks, communities and messaging platforms to create a personal social feed, then we don’t need to all be on the same platform. If I can tell my agent to keep me updated with all my interests and groups, from local news to hobbies and political debates, and handle the messy details of logging in and aggregating content, then perhaps we could help sustain the safer, more human-scale small world networks that are out there already under the radar. Ever the optimist!


#AI #Innovation #Lee #Linklog #Technology

576 - Using LLMs at Oxide / RFD / Oxide

one thing that stands out to me here is the emphasis on how different ways of using the tool affect the feel (vibe?) and function of the team. #linklog

MPG Standards: Some Perspective It takes a long time for flows to change stocks

TIL only 15-17m LDVs are added per year to a stock of 295m. If the average lifespan is 12 years, have we passed peak LDV? #linklog
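The back-of-envelope behind that question can be made explicit. Using the figures quoted in the post and assuming a simple steady state (constant sales, retirement at end of life), stock = flow × lifespan:

```python
# Stock/flow check for the US light-duty vehicle (LDV) fleet, using the
# figures quoted in the post (illustrative arithmetic, not a forecast).

sales_per_year = 16e6   # midpoint of the 15-17m LDVs added per year
stock = 295e6           # current fleet size
lifespan = 12           # assumed average vehicle lifespan, years

# In steady state, stock = flow * lifespan.
steady_state_stock = sales_per_year * lifespan   # 192m
implied_lifespan = stock / sales_per_year        # ~18.4 years

print(f"steady-state stock at 12-yr life: {steady_state_stock / 1e6:.0f}m")
print(f"lifespan implied by current stock: {implied_lifespan:.1f} years")
# If vehicles really last only 12 years on average, the 295m fleet sits
# above its steady state and would shrink toward ~192m, i.e. past peak LDV.
```

The flip side: sustaining a 295m fleet on 16m sales per year requires vehicles to last about 18 years on average, so the answer hinges on which lifespan figure is right.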

Seeing like a software company The big idea of James C. Scott’s Seeing Like A State can be expressed in three points: Modern organizations exert control by maximising “legibility”: by…

I would like to read 1-5k words about the use of legibility and illegibility inside consultancies, think tanks, and other knowledge production orgs #linklog

ways of vibing treatments and responses in the real world

"if you want a theory of economic vibes, it is vibes you need to be theorising about." 'epistemic collapse' could be part of a theory of economic vibes, but i think not all of it. a theory of vibes seems like it would be a more sophisticated theory of expectations. #linklog

How A Lone Hacker Shredded the Myth of Crowdsourcing High-tech analysis of a 2011 DARPA Challenge suggests that far from being wise, crowds can’t be trusted

this is framed as being about the un-wisdom of crowds but it really seems to be about the challenges of distributed trust and alignment. in fairness I suppose wisdom is often presumed to be benign. #linklog

Linklog 1

collecting my #linklog posts in batches here

AI Replacement and Wash Hires What could be causing cycles of hiring and firing in the labour market?

I like this term "wash hire". data drift, particularly for firm-specific knowledge, seems a reasonable driver of wash hiring. #linklog


"The Bayesian approach, despite its virtues, changes the topic" made me laugh. really nice paper, much on the value of suitably constrained methodological anarchy #linklog
