The malpractice of the AI industry
The dangerous rise of false authority in AI, and how it’s quietly making us dumber.
🚨 Alert. This article is dangerously true.
It might make you uncomfortable. That’s the point.
It’s Sunday. Take your coffee and prepare to digest something you won’t see in any viral thread or “AI creator” carousel.
Because what’s happening right now in the AI industry isn’t innovation.
It’s malpractice.
According to the dictionary, malpractice is “a dereliction of professional duty or a failure to exercise an ordinary degree of professional skill or learning by one (such as a physician) rendering professional services that results in injury, loss, or damage.”
In the AI industry, we’re witnessing a form of malpractice that’s just as dangerous, only this time, the damage is more challenging to see. It’s not happening in hospitals but in codebases, investor meetings, classrooms, and LinkedIn posts.
Imagine going to a doctor whose only qualification was a YouTube video and a certificate they bought online. Or trusting a fitness coach who became “certified” after a weekend crash course, the kind where you pay, post a selfie, and get a badge. That’s what the AI world is starting to look like.
Everyone’s suddenly an expert. Everyone’s building AI. Everyone’s selling a course. And yet, under the surface, very few truly understand the complexity behind what they’re preaching. The illusion of simplicity is being sold as a product, and it’s dangerous.
Because no, it’s not easy.
If a doctor performs the wrong surgery, it’s called malpractice.
But in tech, when someone spreads false information, sells fake success stories, and influences thousands, what do we call that?
Just like in medicine, where a misdiagnosis can harm a patient, in tech, misinformation can rot your mind.
When people with barely any understanding teach others, it creates a chain reaction of ignorance, a game of telephone where the truth gets distorted at every step. People build things they don’t understand, repeat concepts they never verified, and believe in systems they never researched.
This kind of malpractice doesn’t just slow progress; it rewires our thinking. It trains us to shortcut the hard stuff, to chase hype, and to outsource our critical thinking to influencers. Over time, it makes us stupid.
The result? A mass of professionals and learners who confuse marketing for mastery.
Who think prompt engineering is AI. Who believe every no-code tool is a breakthrough. Who can’t tell a neural network from a nested if-else.
And it’s not entirely their fault; they’re being manipulated.
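To make that neural-network-versus-nested-if-else distinction concrete: a toy sketch below contrasts a hand-written rule with a single trained “neuron” (a pure-Python perceptron). Everything here, the OR example, the learning rate, and the epoch count, is illustrative, not how any real AI product is built.

```python
# Hand-coded logic: the programmer writes every rule explicitly.
def rules_or(a, b):
    if a == 1:
        return 1
    elif b == 1:
        return 1
    else:
        return 0

# A single "neuron": the behavior is learned from examples instead of
# being written by hand. (Toy perceptron, illustrative values only.)
def train_neuron(samples, epochs=20, lr=0.1):
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (a, b), target in samples:
            pred = 1 if w1 * a + w2 * b + bias > 0 else 0
            err = target - pred            # update only when wrong
            w1 += lr * err * a
            w2 += lr * err * b
            bias += lr * err
    return w1, w2, bias

# Training data for the OR function: inputs and expected outputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, bias = train_neuron(data)

def neuron_or(a, b):
    return 1 if w1 * a + w2 * b + bias > 0 else 0

# Both compute OR, but one encodes fixed rules and the other learned weights.
for (a, b), target in data:
    assert rules_or(a, b) == neuron_or(a, b) == target
```

The point of the contrast: the if-else encodes the answer the author already knew, while the neuron finds weights from data. Scaling that second idea up, with real math, statistics, and engineering, is where the actual difficulty of AI lives.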
There’s a business model behind this chaos. Hype sells. Fake projects go viral. Glossy LinkedIn posts about imaginary AI startups generate leads. Courses promise to make you an expert in 10 days, just pay now.
It creates FOMO (fear of missing out), and plenty of it.
And so, the market gets flooded with noise, and people stop asking the most important question:
Is this even real? Is this even for me?
So how do you protect yourself?
How do you navigate a tech terrain where confidence is often louder than competence?
Here are a few simple, but powerful, habits to help you filter noise from knowledge:
1. Always trace the source.
Before you believe a claim, ask: Where did this come from? Is there a paper, a GitHub repository, a real-world use case behind it? Can you trace it back to someone credible, or is it just someone paraphrasing someone who paraphrased someone?
If there's no link to real work, it's just words.
2. Follow knowledge, not noise.
The best minds in AI are often the quietest. They share insights, not slogans. Instead of following the loudest “AI creator” on TikTok, follow the people writing papers, building open-source tools, or working on real products.
Look for depth, not virality.
Don’t go by the numbers; a big follower count often means nothing.
3. Ask dumb questions.
Seriously. Ask “why does this work?”, “how does this scale?”, “what are the limitations?” If the so-called expert can’t answer those without deflecting, you’ve probably spotted a red flag.
Real experts welcome questions; they don’t hide behind buzzwords.
4. Respect the hard stuff.
Good AI is hard. It’s built on years of math, statistics, software engineering, and system design.
If someone makes it all sound easy, they’re selling something, usually a course.
Don’t let anyone tell you that understanding is optional. It’s not.
5. Build your research muscle.
Before buying or believing anything, take 15 minutes to dig.
Google the person. Look for GitHub links, blog posts, citations, real-world products.
If their entire portfolio is just carousels and Canva posts, you already know what you’re buying: hot air.
Let’s not sugarcoat it: this AI gold rush is warping our minds.
When we reward oversimplification, when we celebrate shortcuts, when we mistake marketing for intelligence, we lose our sharpness. Slowly. Silently.
And that’s the real malpractice.
It’s time to stop glorifying “easy” and start respecting the process. Because intelligence isn’t just about building AI, it’s about not becoming stupid in the process.
And to all the self-proclaimed AI “experts” and “influencers” out there:
Please.
Think before you inject false information into the world. Before you build fake projects for engagement. Before you sell the illusion of expertise to people who trust you. Before you create FOMO-driven hype that pushes others into the same trap.
Because one day, you might have children. And they’ll grow up in the world you’re distorting, a world where shortcuts are praised, critical thinking is rare, and truth is buried under branding.
Make sure you’re not the reason they stop thinking for themselves.