The Hype of AI

Jul 30, 2024

No algorithm steals a chef’s gut-hunch or a jazz riff. Unplug, dance, taste life—keep the spark alive. Partner with AI, don’t lead it to the dark side, and chuckle at its trips, forging sanity in the absurdity of our clever, clumsy creations.

The Algorithmic Tapestry:

Weaving Minds, Machines, and Mischief

In the grand theater of human ingenuity, artificial intelligence (AI) takes center stage as both protagonist and trickster—a Promethean flame with a Pandora’s twist. From Dreyfus & Dreyfus’s (1988) phenomenological skepticism to Zhang et al.’s (2024) empirical leaps, AI’s saga is a vibrant tapestry of ambition, critique, and recursive riddles. This blog pirouettes through philosophers, technologists, and visionaries—thinkers from Luhmann to Weick, Bostrom to Brooks, with Musk (2019) crashing the party via brain-machine interfaces—to unravel AI’s past, present, and mischievously uncertain future. With academic heft and a dash of whimsy, let’s weave these voices into a critique that’s as sharp as it is spirited.

The Human Mind vs. the Machine’s Mimicry
Our curtain rises with Hubert and Stuart Dreyfus (1988), who lobbed a Heideggerian Molotov at AI’s early swagger. Human expertise, they argued—intuitive, embodied, and context-drenched—mocks the stiff rulebooks of machines. Enter Brooks (1991), robotics’ enfant terrible, who countered that intelligence isn’t lofty cognition but a scrappy tango with the world’s messiness. These clashing visions frame AI’s existential tussle: a mimic of human thought or a stumble toward something uniquely “smart”?

French (2000) and Harrington (2001) deepen the probe. French’s analogy obsession reveals machines’ struggle with the poetic leaps humans take for granted, while Harrington’s decision-making lens exposes AI’s binary brittleness amid ambiguity. Together, they hum a skeptic’s tune: imitation isn’t intuition.

Systems, Sensemaking, and Social Shenanigans
Now, step into Van Lier’s (2009-2018) systems-theoretic symphony—where Luhmann’s autopoietic elegance meets Weick’s sensemaking grit (Van Lier, 2013a). AI, in this orchestration, isn’t mere tech; it’s a social imp, co-evolving with human systems in a feedback frenzy. Hardjono (2010) spices it up, suggesting AI rewires organizational power and chaos with puckish glee. Livingstone (2015) and Smith (2018) tag along, noting how digital systems infiltrate human meaning-making, often with a giggle of unintended fallout. Ross (2019) frames AI as a cultural funhouse mirror, reflecting our quirks back at us.

Enter Musk (2019), unveiling his Neuralink’s audacious bid to wire human brains to machines with unprecedented bandwidth—a literal mind-meld that amplifies Van Lier’s systemic dance. Musk’s vision isn’t just tech; it’s a societal provocateur, promising enhanced cognition while smirking at the risks of runaway complexity.

“We are drowning in information, while starving for wisdom.”
Edward O. Wilson (1999)

Futures, Fears, and Philosophical Fireworks
Cue the futurists—Bostrom (2014) and Kurzweil (2002)—storming the stage with grand prophecies. Bostrom’s superintelligence specter warns of AI outpacing us, not with evil but with aloof efficiency, like a cosmic toddler flattening our sandcastles. Kurzweil croons of a Singularity where human and machine merge in a techno-love fest. Müller (2014) and Oppy (2017) referee, dissecting ethics and epistemology with cool-headed precision.

Musk (2019) struts back in, his brain-machine interface aligning with Kurzweil’s merger dreams but tempered by Bostrom’s caution. Neuralink’s “thousands of channels” aren’t just wires—they’re a bet on controlling AI’s ascent by fusing it with us, a pragmatic yet wild-eyed gambit. Postma (2024) twirls in next, arguing AI’s path is a co-authored tale, riddled with bias and blind spots. Zhang et al. (2024) and Gao et al. (2023) ground this in tech dazzle—neural nets, generative magic—while Chaturvedi et al. (2013) showcase AI’s simulation and visual remixes, blurring reality with pixelated panache.

The Playful Past and the Tarskian Twist
Rewind to Tarski (1935), whose semantic truth underpins AI’s symbolic aspirations, and Hodges (1983, 2014), who immortalizes Turing’s genius and heartbreak. Chubb (1984) and McCorduck (1979) sketch AI’s scrappy youth, while Leavitt (2007) and Michael (2015) add historical zest—today’s debates, they wink, echo yesterday’s punch-card passions.

The philosophers—Künne (2003), Strydom (2000), Marian (2022)—grapple with meaning and agency in an AI-soaked era, while Steinhoff (2014) and Huang (2021) lob ethical grenades: who’s to blame when the algorithm plays pranks? Musk (2019) might dodge the question, too busy threading neurons to fret over liability.

AI, Robotics, the Economic Remix—Labor, Intelligence & the Human Spark
Zoom into the economic stage, where AI and robotics remix the labor force and revalue human intelligence, craftsmanship, ingenuity, and artistry. Bostrom (2014) sets a grim overture: superintelligent systems could automate swathes of jobs, from truck drivers to accountants, leaving humans scrambling. Steinhoff (2021) sharpens this in Automation and Autonomy, tracing how AI shifts capital from labor to machines, empowering corporations while sidelining workers. Van Lier (2010, 2015) joins the riff, noting AI’s systemic infiltration—factory bots, algorithmic managers—tilts power dynamics, with Hardjono (2010) adding that organizations thrive as laborers flounder.

Yet, Dreyfus & Dreyfus (1988) strike a defiant chord: machines may crunch data, but they falter at the nuanced craftsmanship of a carpenter or the ingenuity of an inventor. Their phenomenological lens insists human intelligence—tactile, intuitive—holds a premium AI can’t counterfeit. Musk (2019) complicates this with Neuralink: if we amplify our brains, could we out-craft the machines? Postma (2024) spins a narrative twist: the value of artistry—a painter’s brushstroke—rises as AI’s generative flood (Zhang et al., 2024) churns out soulless imitations. Steinhoff (2021) warns, though: capital’s hunger for efficiency might still drown human sparks in a sea of automation.

Intermission:

Keeping Our Sanity in the AI Age
Before the curtain falls, let’s pause—how do we keep our wits when AI’s tendrils tickle our brains? Dreyfus & Dreyfus (1988) offer a lifeline: lean into what machines can’t steal—our intuitive, embodied savoir-faire. No algorithm can replicate the gut-hunch of a seasoned chef or the improvisation of a jazz soloist. Step one: unplug, dance, taste the world—keep the flesh-and-blood circuits firing.

Van Lier’s systemic lens (2013a, 2015) whispers step two: treat AI as a partner, not a puppeteer. It’s a feedback loop, not a dictator—steer it with human quirks intact. Musk’s (2019) Neuralink gambit raises the stakes: if we’re plugging in, set boundaries. Thousands of channels sound thrilling, but sanity demands a kill switch—literal or metaphorical—to avoid drowning in data’s hum. Postma (2024) posits: own the story. AI’s biases reflect ours—curate its inputs like a picky librarian, lest it spin tales that fray our grip.

Finally, a playful tweak—laugh at it. When the chatbot flubs or the algorithm overreaches, chuckle like Brooks (1991) at a robot tripping over its sensors. Sanity isn’t just preserved; it’s forged in the absurdity of coexisting with our clever, clumsy creations.

A Critical Coda with a Wink
What’s the takeaway from this eclectic ensemble? Dreyfus and Brooks root AI in human limits and embodied quirks. Van Lier, Ross, and Musk—especially with his brain-machine bravado—expose its social swagger, practical yet perilous. Bostrom, Kurzweil, and Postma spar over its destiny, while Zhang, Gao, and Richter flaunt its techy flair. Tarski, Hodges, and Oppy anchor us in timeless questions. Act V warns of an economic remix where human sparks shine brighter—or flicker out.

AI feels less like a monolith and more like a mosaic—human dreams, glitches, and impish surprises in one. Neuralink epitomizes this: a leap toward mastery that might short-circuit our sanity—unless we heed our intermission’s counsel. To critique AI is to wrestle with ourselves, chuckling at the absurdity. So, let’s keep poking, playing, and—per Musk’s playbook—maybe even plugging in, but with a grin and a grip on our wits.

Ways that light us up!

AI’s a partner, not a puppet—here’s how to wield it for good, echoing the blog’s optimism. These tips aim to amplify human spark, not snuff it.

1. Boost Intuition with Human-AI Duets

  • How: Pair AI with human expertise, as Dreyfus and Dreyfus (1988) champion—use it to crunch data while you wield the gut-hunch (Intermission, Step 1).
  • Benefit: Marries tech precision with human soul—craftsmanship shines.

2. Partner for Systemic Harmony

  • How: Treat AI as a feedback loop, not a tyrant, per Van Lier’s (2013a, 2015) symphony—co-create solutions with it (Intermission, Step 2).
  • Benefit: Power shifts stay humane—Steinhoff’s (2021) capital tilt gets a conscience.

3. Set Boundaries for Mental Peace

  • How: Cap AI’s reach—Musk’s (2019) Neuralink needs a kill switch (Intermission, Step 3)—ensuring it augments, not overwhelms.
  • Benefit: Sanity holds; you steer the ship, not the wires.

4. Curate Truth with Narrative Care

  • How: Shape AI’s story, as Postma (2024) urges—feed it diverse, vetted inputs to dodge bias traps (Intermission, Step 4).
  • Benefit: Truth aligns with reality—Ross’s (2019) mirror reflects clearer.

5. Laugh and Learn from Flubs

  • How: Embrace AI’s quirks—chuckle at its stumbles, like Brooks’ (1991) tripping bots (Intermission, Step 5)—and refine it with humor.
  • Benefit: Keeps us grounded—sanity and progress dance together.

 

Ways to lose our souls!

 

1. Weaponize Bias for Manipulation

  • How: Feed AI skewed data—like cherry-picking X posts—to amplify prejudices, as Postma (2024) hints at with narrative control. Think propaganda bots pushing divisive agendas.
  • Risk: Backfires with legal scrutiny or societal pushback—Steinhoff’s (2023) ethics network would cry foul.

2. Overload Neural Interfaces for Control

  • How: Hijack Musk’s (2019) Neuralink-style tech to flood users with AI-driven noise, drowning their sanity (Intermission’s kill-switch plea ignored).
  • Example: Pump subliminal ads through brain-machine links, turning humans into unwitting drones.
  • Risk: Lawsuits galore—privacy laws and human rights don’t play nice with mind hacks.

3. Fake Expertise with Generative Floods

  • How: Exploit Zhang et al.’s (2024) generative dazzle to churn out convincing but hollow content, bypassing Dreyfus and Dreyfus’s (1988) human intuition premium.
  • Risk: Credibility collapses—people sniff out the sham eventually.

4. Automate Job Theft Without Mercy

  • How: Push Steinhoff’s (2021) automation shift to extremes, deploying AI to gut labor forces (truckers, clerks) with no retraining net, as Bostrom (2014) fears.
  • Risk: Economic unrest—unions and regulators bite back hard.

5. Dodge Accountability with Ethical Loopholes

  • How: Hide behind AI’s opacity (Steinhoff, 2014) when it misfires—blame “the algorithm” for decisions gone rogue, sidestepping Huang’s (2021) liability grenades.
  • Risk: Courts catch on—precedents like autonomous vehicle lawsuits tighten the noose.

Abuse AI, and it’s a Pandora’s box—mischief might tickle, but the fallout stings. Keep it in check, or you’re the one bruised!

References:

 

  • Allen, P. M., & Sanglier, M. (1981). Urban Evolution, Self-Organization, and Decisionmaking. Environment and Planning A: Economy and Space, 3(13), 167-183.
  • Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  • Brooks, R. A. (1991). Intelligence Without Representation. Artificial Intelligence, 47(1-3), 139-159.
  • Chaturvedi, R., Armstrong, B., Chaturvedi, A., Dolk, D., & Drnevich, P. (2013). Got a problem? Agent-based modeling becomes mainstream. Global Economics and Management Review, 18(2), 33-39.
  • Chubb, D. W. J. (1984). Knowledge engineering problems during expert system development. SIGSIM Simulation Digest, 15(3), 5-9.
  • Dreyfus, H. L., & Dreyfus, S. E. (1988). Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Free Press.
  • French, R. M. (2000). The Turing Test: The First 50 Years. Trends in Cognitive Sciences, 4(3), 115-121.
  • Gao, C. A., et al. (2023). Machine learning links unresolving secondary pneumonia to mortality in patients with severe pneumonia, including COVID-19. Journal of Clinical Investigation.
  • Grok 3. (2025). Contributions to The Algorithmic Tapestry: Weaving Minds, Machines, and Mischief. xAI Corporation.
  • Harrington, D. (2001). The Age of Spiritual Machines: When Computers Exceed Human Intelligence [Review]. The Journal of Education, Community and Values, 1(3).
  • Hodges, A. (1983). Alan Turing: The Enigma. Simon & Schuster.
  • Hodges, A. (2014). Alan Turing: The Enigma (Updated Edition). Princeton University Press.
  • Huang, X., Mallya, M., Wang, T.C., & Liu, M.Y. (2021). Multimodal Conditional Image Synthesis with Product-of-Experts GANs. ArXiv.
  • Kurzweil, R. (2002). The Age of Spiritual Machines. Penguin Books.
  • Künne, W. (2003). Conceptions of Truth. Oxford University Press.
  • Leavitt, D. (2007). The Man Who Knew Too Much: Alan Turing and the Invention of the Computer. W. W. Norton & Company.
  • Lier van, B. (2009). Luhmann meets ‘the Matrix’. Exchanging and sharing information in network centric environments. Eburon.
  • Lier van, B., & Hardjono, T. W. (2010). Luhmann meets the matrix. Exchanging and sharing information in network-centric environments. Journal of Systemics, Cybernetics and Informatics, 9(3), 68-72.
  • Lier van, B. (2013a). Luhmann meets Weick: Information Interoperability and Situational Awareness. Emergence: Complexity & Organization, 15(1), 71-95.
  • Lier van, B. (2013b). Can Machines Communicate: The Internet of Things and Interoperability of Information. Engineering Management Research, 2(1), 55-66.
  • Lier van, B. (2018). Cyber-Physical Systems of Systems and Complexity Science: The whole is more than the sum of individual and autonomous cyber-physical systems. Cybernetics and Systems, 49(7-8), 538-565.
  • Livingstone, D. (2015). Transhumanism: The History of a Dangerous Idea. CreateSpace Independent Publishing Platform.
  • Marian, D. (2022). The Correspondence Theory of Truth. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy (Summer ed.). Internet: Stanford University.
  • McCorduck, P. (1979). Machines Who Think. W. H. Freeman.
  • Müller, T., Evans, A., Schied, C., & Keller, A. (2022). Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. ACM Transactions on Graphics, 41(4).
  • Müller, V. C. (2014). Risks of General Artificial Intelligence. Journal of Experimental & Theoretical Artificial Intelligence, 26(3), 297-301.
  • Musk, E. (2019). An Integrated Brain-Machine Interface Platform with Thousands of Channels. bioRxiv, The Preprint Server for Biology.
  • Oppy, G. R. (Ed.). (2017). The Routledge Handbook of Contemporary Philosophy of Religion (Routledge Handbooks in Philosophy). Routledge.
  • Rodrigues, F. R., et al. (2016). The Seven Pillars of Paradoxical Organizational Wisdom: On the use of paradox as a vehicle to synthesize knowledge and ignorance (Chapter 4).
  • Smith, W. J. (2018). Transhumanism: A Religion for Postmodern Times. Acton.org, 28(4).
  • Starke, S., Zhao, Y., Zinno, F., & Komura, T. (2021). Neural Animation Layering for Synthesizing Martial Arts Movement. ACM Transactions on Graphics, 40(4), Article 93.
  • Steinhoff, J. (2021). Automation and Autonomy: Labour, Capital and Machines in the Artificial Intelligence Industry. Springer Nature.
  • Steinhoff, J. (2023). AI Ethics as Subordinated Innovation Network. AI & Society, 39(4), 1-13.
  • Strydom, P. (2000). Discourse and Knowledge: The Making of Enlightenment Sociology. Liverpool University Press.
  • Wilson, E. O. (1999). Consilience: The Unity of Knowledge. Vintage.

Further Readings
