
Andrea De Mauro, PhD

@ademauro.com

Data Analytics and AI Exec Advisor and Founder | Book author, published by Pearson, FT, Packt, Apogeo | Professor of AI, Business and Marketing Analytics | Former Exec at Vodafone and P&G | ademauro.com | linkedin.com/in/andread

768
Followers
582
Following
31
Posts
23.10.2023
Joined

Latest posts by Andrea De Mauro, PhD @ademauro.com

Special Track #26-2026 | IFKAD This track will investigate the transformative role of sustainable digital platforms (Kolk and Ciulli, 2020; Bonina et al., 2021; Hellemans et al., 2022) as

This is why we are chairing a Special Track at IFKAD 2026.
Enabling Hybrid Knowledge: Digital Platforms for Human-AI Synergy in Sustainable Organizations.

Extended abstracts: 500 to 1,000 words.
Deadline: January 31st, 2026.

Details and submission info here:
www.ifkad.org/special-trac...

21.01.2026 09:12 👍 0 🔁 0 💬 0 📌 0

I see how companies across industries run AI programs.

One pattern is consistent.
The digital platform makes the difference.

Teams that build modular platforms for human and AI collaboration scale faster. They measure business value more clearly. They innovate with less friction.

21.01.2026 09:12 👍 0 🔁 0 💬 1 📌 0

Can you think of additional application areas, or examples you're already seeing in practice? Please add your perspective!

26.12.2025 11:36 👍 0 🔁 0 💬 0 📌 0

8. Media optimization โ€“ Generating, testing, and optimizing ads and creatives at scale.

I love working on research-backed taxonomies because they let companies self-assess where they are, what they already do, and what they should start exploring in 2026!

26.12.2025 11:36 👍 0 🔁 0 💬 1 📌 0

6. Creativity boosting โ€“ Supporting ideation, concept generation, and creative exploration. Not necessarily replacing agencies but making the process faster and cheaper.
7. Dynamic pricing – Simulating scenarios and supporting adaptive pricing strategies that optimize promo spending.

26.12.2025 11:36 👍 0 🔁 0 💬 1 📌 0

🔹 Business-facing
5. Market research โ€“ Synthesizing large volumes of unstructured data into actionable insights. GenAI is the new tool in the kit of Market Researchers!

26.12.2025 11:36 👍 0 🔁 0 💬 1 📌 0

4. Content personalization โ€“ Adapting content formats, tone, and narratives to each user and moment. This includes adapting content for AI agents (Generative Engine Optimization).

26.12.2025 11:36 👍 0 🔁 0 💬 1 📌 0

2. Customized recommendations โ€“ Dynamically suggesting products, content, or offers based on context and behavior
3. Customer service chatbots โ€“ Conversational agents that support, guide, and engage customers in real time

26.12.2025 11:36 👍 0 🔁 0 💬 1 📌 0

We group them into consumer-facing and business-facing uses:

🔹 Consumer-facing
1. Hyper-personalized communication โ€“ Tailoring messages, creatives, and touchpoints at the level of the individual customer

26.12.2025 11:36 👍 0 🔁 0 💬 1 📌 0

We developed a practical taxonomy of Generative AI applications in marketing, identifying 8 core application areas that, in our observation, are systematically reshaping how brands create value.

26.12.2025 11:36 👍 1 🔁 0 💬 1 📌 0

Very happy to share new research I had the pleasure of publishing with Andrea Sestino and Luigi Nasta, now included in the Encyclopedia of Artificial Intelligence in Marketing, @springer.springernature.com.

26.12.2025 11:36 👍 1 🔁 0 💬 1 📌 0
Diagram illustrating a taxonomy of Generative AI applications in marketing. At the top is โ€œGenAI in Marketing,โ€ branching into two domains: Consumer-facing and Business-facing. Consumer-facing applications include โ€œImprove Shopping Fundamentalsโ€ (with Hyper-personalized Communication and Customized Recommendations) and โ€œImprove Consumption Experienceโ€ (with Customer Service Chatbots and Content Personalization). Business-facing applications include โ€œImprove Decision Makingโ€ (with Market Research and Creativity Boosting) and โ€œImprove Financial Outcomesโ€ (with Dynamic Pricing and Media Optimization). Icons visually represent each of the eight application areas.


Leveraging GenAI is the new norm for brand building.
And 2025 made this crystal clear, at least to me and to the companies I'm working most closely with.

26.12.2025 11:36 👍 1 🔁 0 💬 1 📌 0

RAG is hardly as "simple" as it looks.
In AI Applications Made Easy, I explain why real-world RAG means more than uploading PDFs and pressing a button. It takes structure, context, and business understanding. Now live on @manningbooks.bsky.social (discounted today: www.manning.com/dotd?a_aid=d...).

15.07.2025 10:12 👍 2 🔁 0 💬 0 📌 0
Deal of the Day. Manning is an independent publisher of computer books, videos, and courses.

Interested in learning how to build no-code Agents, RAG, and Advanced Prompts? Check out the New MEAP Deal of the Day on April 24: 45% discount on my new book, "Generative AI Made Easy," along with other carefully selected titles! @manningbooks.bsky.social www.manning.com/dotd?a_aid=d...

24.04.2025 07:29 👍 2 🔁 0 💬 0 📌 0

#LearnSora Prompt: "A Labrador dog with a red tie giving a lecture on marketing at LUISS university in Rome". Critique: You can see Pinus pinaster, the "maritime pine", typical of the Rome area, outside the window. How many tails has the Labrador got?
#OpenAI #Sora

22.12.2024 11:30 👍 1 🔁 0 💬 0 📌 0

Thank you to: Simone Malacaria, Daniele Diano, Daniela Meo, Giorgia Guasti, Fabio Pistilli, Mario Miozza, and Anna Maria La Marra.

Program directors: Irene Finocchi and Paolo Peverini.

#Analytics #ArtificialIntelligence #DataDrivenLeadership #BusinessEducation #FutureSkills

22.12.2024 09:25 👍 0 🔁 0 💬 0 📌 0

A heartfelt thanks to my amazing teaching assistants and the incredible companies, including Fater and P&G, who shared their Analytics use cases with our classes. Your contributions enriched our learning experience and gave students a glimpse into real-world applications! #DayOne #PeopleFirst

22.12.2024 09:25 👍 0 🔁 0 💬 1 📌 0

🙌 So many people to thank now... I'm immensely proud of my students for their curiosity, creativity, and resilience in tackling these challenges. I got them out of their comfort zone, and they made it through!

22.12.2024 09:25 👍 0 🔁 0 💬 1 📌 0

🎓 During this semester at Luiss Guido Carli University, I taught nearly 150 brilliant students, split between master's students in Marketing and bachelor's students in Management and AI. Many of them will be looking for their first job soon - they are precious material, so hire them if you can!

22.12.2024 09:25 👍 0 🔁 0 💬 1 📌 0

So the question is: are businesses ready for this shift? Are they taking care of their employees' Data and AI fluency? In most cases... unfortunately, not yet!

22.12.2024 09:25 👍 0 🔁 0 💬 1 📌 0

3๏ธโƒฃ ๐—ง๐—ต๐—ฒ ๐—ฐ๐—ผ๐—ป๐˜ƒ๐—ฒ๐—ฟ๐—ด๐—ฒ๐—ป๐—ฐ๐—ฒ ๐—ผ๐—ณ ๐—ฏ๐˜‚๐˜€๐—ถ๐—ป๐—ฒ๐˜€๐˜€ ๐—ฎ๐—ป๐—ฑ ๐—”๐—œ ๐—ถ๐˜€ ๐—ต๐—ฒ๐—ฟ๐—ฒ: To thrive, companies must support these competencies and be ready for a workforce equipped with these skills. Universities play a critical role in preparing students, but learning in this field truly never ends.

22.12.2024 09:25 👍 0 🔁 0 💬 1 📌 0

2๏ธโƒฃ ๐—›๐—ฎ๐—ป๐—ฑ๐˜€-๐—ผ๐—ป ๐—ฒ๐˜…๐—ฝ๐—ฒ๐—ฟ๐—ถ๐—ฒ๐—ป๐—ฐ๐—ฒ ๐—ถ๐˜€ ๐—ถ๐—ป๐˜ƒ๐—ฎ๐—น๐˜‚๐—ฎ๐—ฏ๐—น๐—ฒ: My students built RFM models from scratch, designed AI-driven geo-marketing recommendations, and translated numbers into consumer insights and data stories using tools like KNIME and Power BI. Their first-person experience will make them better leaders.

22.12.2024 09:25 👍 0 🔁 0 💬 1 📌 0

Key takeaways I've personally been reflecting on:

1๏ธโƒฃ ๐—™๐˜‚๐˜๐˜‚๐—ฟ๐—ฒ ๐—บ๐—ฎ๐—ป๐—ฎ๐—ด๐—ฒ๐—ฟ๐˜€ ๐—ป๐—ฒ๐—ฒ๐—ฑ ๐—ฑ๐—ฎ๐˜๐—ฎ ๐˜€๐—ธ๐—ถ๐—น๐—น๐˜€: Understanding what AI is (and isnโ€™t) is critical, as well as learning its organisational and cultural enablers. Managers-to-be must understand what's at stake and navigate data and AI topics with fluency.

22.12.2024 09:25 👍 0 🔁 0 💬 1 📌 0
Andrea De Mauro teaching his students at LUISS University in Rome.

Teach a marketing student how to build an AI model from scratch, and magic will happen! 🪄✨

During my last semester at LUISS University, I witnessed firsthand how capable and creative students can be when they are empowered to merge management, data analytics, and AI into tangible business results.

22.12.2024 09:21 👍 1 🔁 0 💬 1 📌 0
OpenScholar: The open-source A.I. that's outperforming GPT-4o in scientific research. OpenScholar, an innovative AI system by Allen Institute for AI and University of Washington, revolutionizes scientific research by processing 45 million papers instantly, offering researchers citation-backed answers and challenging proprietary AI systems.

Reading about #OpenScholar takes me back to those long nights in the library during my PhD. How times have changed! #AI like this is revolutionising how researchers access and synthesise literature. Can we even imagine research without AI now? #Research #LiteratureReview #AcademicSky

24.11.2024 08:33 👍 13 🔁 4 💬 4 📌 1
The image presents a framework titled "The Four Levels of RAG" (Retrieval Augmented Generation), which outlines the progressive stages of LLM (Large Language Model) capabilities in integrating external knowledge repositories for enhanced reasoning. The framework spans from "Basic RAG" to "Advanced RAG," showcasing increasing value and complexity over four levels.

Level 1: Explicit Fact Retrieval
Basic RAG involves locating specific facts within a repository to provide answers. External facts, represented as "chunks," are processed by an LLM. This level is nearly out-of-the-box and requires minimal customization. It has some positive impact.

Level 2: Implicit Fact Identification
The second level integrates facts dispersed across multiple chunks within a repository. LLMs piece together connected information (e.g., facts "F1, F2, F3") derived from content "C." This level moves beyond simple retrieval by establishing contextual relationships between facts.

Level 3: Explicit Rationale Application
Advanced RAG begins here. The LLM applies provided rationale frameworks to integrate dispersed facts and generate answers. External rationales are represented through flowcharts or structured logic diagrams, enabling a deeper layer of reasoning.

Level 4: Hidden Rationale Identification
The most advanced stage, where LLMs analyze facts in a repository and infer hidden rationales to generate answers. An example involves solving a "24-point game," where basic arithmetic operations (e.g., addition, multiplication) are combined strategically to reach the solution. This level requires highly curated knowledge and customized implementations.

The chart visually represents value (vertical axis) versus complexity/time to production (horizontal axis). A gradient indicates increasing impact and customization needs across the levels. An upward arrow shows growing potential value.

The framework is adapted from Zhao et al., "Retrieval Augmented Generation (RAG) and Beyond," 2024.


Exploring the Four Levels of RAG (Retrieval Augmented Generation): From basic fact retrieval to uncovering hidden rationales, this framework shows how LLMs evolve to provide deeper insights. Adapted from Zhao et al., 2024.
📄 Learn more: arxiv.org/abs/2409.14924
#AI #DataAnalytics #GenerativeAI

17.11.2024 18:31 👍 2 🔁 1 💬 0 📌 0
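As a companion to the framework above, here is a minimal sketch of Level 1, Explicit Fact Retrieval: pull the most relevant chunks from a repository and hand them to an LLM inside an augmented prompt. This is an illustration under toy assumptions, not the paper's method: keyword overlap stands in for embedding search, the LLM call itself is left out, and the names `retrieve`, `build_prompt`, and `docs` are hypothetical.

```python
# Toy Level 1 RAG: keyword-overlap retrieval over text chunks, then prompt
# assembly. A real system would use embeddings and an actual LLM call.
import re


def tokens(text: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query and return the top k."""
    q = tokens(query)
    return sorted(chunks, key=lambda c: len(q & tokens(c)), reverse=True)[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the augmented prompt a (stubbed) LLM would receive."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"


docs = [
    "The 2024 revenue of ACME was 10M euros.",
    "ACME was founded in Rome in 1999.",
    "The Labrador is a popular dog breed.",
]
top = retrieve("When was ACME founded?", docs, k=1)
print(build_prompt("When was ACME founded?", top))
```

Levels 2 to 4 layer progressively more work on top of this skeleton: multi-chunk fact linking, externally supplied rationales, and finally rationales the model must infer itself.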

Hi @mehdioudjida.bsky.social, I'd be happy to be added! Thanks

17.11.2024 17:41 👍 0 🔁 0 💬 0 📌 0

Hey @annamillsoer.bsky.social , I'd love to be added! My latest book, published by Pearson, covers AI in the workplace and all that comes with it, including regulation and governance. Thanks!

17.11.2024 17:37 👍 1 🔁 0 💬 1 📌 0

Hi @hetanshah.bsky.social, I'd love to be added! Thanks

17.11.2024 17:31 👍 0 🔁 0 💬 0 📌 0