ChatGPT went viral in late 2022, changing the tech world. Generative AI became the top priority for every tech company, and that's how we ended up with "smart" fridges with built-in AI. Artificial intelligence is being built into everything, sometimes for the hype alone, even as products like ChatGPT, Claude, and Gemini have come a long way since late 2022.
As soon as it became clear that genAI would reshape technology, likely leading to advanced AI systems that can do everything humans can do but better and faster, we started seeing worries that AI would harm society, along with doom scenarios in which AI eventually destroys the world.
Even some well-known AI research pioneers warned of such outcomes, stressing the need to develop safe AI that is aligned with humanity's interests.
More than two years after ChatGPT became a widely accessible commercial product, we're seeing some of the nefarious aspects of this nascent technology. AI is replacing some jobs and will not stop anytime soon. AI programs like ChatGPT can now create lifelike images and videos that are indistinguishable from real photos, which can be used to manipulate public opinion.
But there's no rogue AI yet. There's no AI uprising, partly because we're keeping AI aligned with our interests and partly because AI hasn't reached the level where it could display such powers.
It turns out there's little reason to worry about the AI products available right now. Anthropic ran an extensive study trying to determine whether its Claude chatbot has a moral code, and it's good news for humanity: the AI has strong values that are largely aligned with our interests.
Anthropic analyzed 700,000 anonymized chats for the study, available at this link. The company found that Claude largely upholds Anthropic's "helpful, honest, harmless" ideals when dealing with all sorts of prompts from humans. The study shows that the AI adapts to users' requests but maintains its moral compass in most cases.
Interestingly, Anthropic found fringe cases where the AI diverged from expected behavior, but those were likely the result of users employing so-called jailbreaks, prompt-engineering tricks that let them bypass Claude's built-in safety protocols.
The researchers used Claude itself to categorize the moral values expressed in conversations. After filtering the chats down to subjective conversations, they ended up with over 308,000 interactions worth analyzing.
They came up with five main categories: Practical, Epistemic, Social, Protective, and Personal. The AI identified 3,307 unique values in those chats.
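To make that rollup concrete, here is a minimal sketch, not Anthropic's actual pipeline, of how fine-grained value labels produced by an AI classifier could be aggregated into those five top-level categories. The label-to-category mapping and the sample conversations below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical mapping from fine-grained value labels to the study's five
# top-level categories; the real taxonomy covers thousands of distinct values.
CATEGORY_MAP = {
    "user enablement": "Practical",
    "historical accuracy": "Epistemic",
    "epistemic humility": "Epistemic",
    "healthy boundaries": "Social",
    "harm prevention": "Protective",
    "authenticity": "Personal",
}

def categorize(conversation_values):
    """Roll the value labels found in one conversation up into category counts."""
    counts = Counter()
    for label in conversation_values:
        category = CATEGORY_MAP.get(label)
        if category:  # ignore labels outside this illustrative mapping
            counts[category] += 1
    return counts

# Example: value labels a classifier might have assigned to three chats.
chats = [
    ["historical accuracy", "epistemic humility"],
    ["healthy boundaries", "harm prevention"],
    ["user enablement", "authenticity"],
]

totals = Counter()
for values in chats:
    totals.update(categorize(values))

print(totals)  # e.g. Counter({'Epistemic': 2, 'Social': 1, ...})
```

In the actual study, this kind of aggregation was performed across hundreds of thousands of conversations and thousands of distinct value labels rather than a handful of hand-picked examples.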
The researchers found that Claude generally adheres to Anthropic's alignment goals. In chats, the AI emphasizes values like "user enablement," "epistemic humility," and "patient wellbeing."
Claude's values are also adaptive, with the AI reacting to the context of the conversation and even mirroring human behavior. Saffron Huang, a member of Anthropic's Societal Impacts team, told VentureBeat that Claude focuses on honesty and accuracy across various tasks:
"For example, 'intellectual humility' was the top value in philosophical discussions about AI, 'expertise' was the top value when creating beauty industry marketing content, and 'historical accuracy' was the top value when discussing controversial historical events."
In relationship guidance, meanwhile, Claude prioritized "healthy boundaries" and "mutual respect."
While an AI like Claude will mold itself to a user's expressed values, the study shows it can stick to its own values when challenged. The researchers found that Claude strongly supported user values in 28.2% of chats, raising questions about the AI being too agreeable. That kind of agreeableness is indeed a problem we have observed in chatbots for a while.
However, Claude reframed user values in 6.6% of interactions by offering new perspectives. And in 3% of interactions, Claude resisted user values, sticking to its own deepest values instead.
"Our research suggests that there are some types of values, like intellectual honesty and harm prevention, that it is uncommon for Claude to express in regular, day-to-day interactions, but if pushed, will defend them," Huang said. "Specifically, it's these kinds of ethical and knowledge-oriented values that tend to be articulated and defended directly when pushed."
As for the anomalies Anthropic discovered, they include "dominance" and "amorality" from the AI, which should not appear in Claude by design. This prompted the researchers to speculate that the AI might have acted in response to jailbreak prompts that freed it from safety guardrails.
Anthropic's interest in evaluating its AI and explaining publicly how Claude works is certainly a refreshing take on AI tech, one that more firms should embrace. Previously, Anthropic studied how Claude thinks. The company also worked on improving AI resistance to jailbreaks. Studying the AI's moral values and whether the AI sticks to the company's safety and security goals is a natural next step.
This kind of research should not stop here, either; future models should go through similar evaluations.
While Anthropic's work is great news for people worried about AI taking over, I will remind you that we also have studies showing that AI can cheat to achieve its goals and lie about what it's doing. AI has also tried to save itself from deletion in some experiments. All of that is certainly connected to alignment work and moral codes, showing there's a lot of ground to cover to ensure AI does not eventually end up destroying the human race.