Kindness. Thoughtfulness. Understanding. Charm. Patience. These are the qualities that create and sustain lifetime relationships, empower networks of people to do amazing things together, and even brighten the days of strangers. We teach our kids The Golden Rule and hope they remember that, even though it’s not always easy, being nice may be the best way to get what you want and need.

You wouldn’t know that from reading most of the stuff on the Internet.

From blog comments to Reddit posts to TikTok, you don’t have to search very hard to find the darker side of human interaction. I’m reminded of this every day when I get my email update from Nextdoor and wonder how a post about someone seeing a monarch butterfly outside their back window devolves into a political shouting match about insect rights. Of course, not everyone treats everyone else badly on the Internet, but it can seem like a less friendly environment than the good old outside world.

Does this less-kind approach more accurately reflect human nature, or is it just how a generally angrier, more stressed world blows off steam? That's a subject for a different post (probably one written by someone with a PhD after their name). However, it does raise a challenge for emerging technologies that are built on and of the Internet, particularly artificial intelligence.

Large language models (LLMs), which are the basis of the most buzz-worthy innovations in AI at the moment, are built by reading the Internet. It's nearly incomprehensible how much information is stored away on servers all over the world, yet every day more information is created, and more of it is consumed by those building the next generation of AI models. By training AI models on human-created information, we get AI chatbots that talk like people, enabling a transcendent way to interact with computers and to build creative new applications that truly change the way people work and live.

On the other hand, the pace of innovation and change has led to all sorts of imagined apocalyptic scenarios where the AI becomes smart enough to act on its own and decides it doesn’t need people anymore. While there is plenty of debate on whether and when that will happen, it’s not hard today to interact with a chatbot and see the challenges of working with a model that is built on the discourse of humans: tangents, falsehoods, overconfidence, randomness.

Of course, the existence of these qualities is exactly what makes the technology so interesting–this is not just another predictable, rules-driven program that can only do what its creator intended. No, this is more like a moody, overly certain, inexperienced teenager that can be both genius and reckless. But just like with kids, in AI these behaviors are learned. And they're learned by watching us.

So much effort is being put into building guardrails for these technologies to leverage the best of human knowledge while avoiding the bad parts. What if we had just been nicer in the first place? Sure, people disagree, emotion drives viewpoints, and typed communication is not always the best medium… But the Internet is full of this stuff! How can anything trained on the best and worst of people not be expected to have flaws built right in?

Retroactively removing and filtering these flaws may be impossible–how does a machine know to separate good discourse from bad? I don’t think it’s too late, though, to change the diet of what these models are fed. The best part of all is that everyone can help, regardless of their technological prowess. Next time we hit reply, could we be a little nicer? Someday the world may depend on it.