Back in February, OpenAI, the creator of ChatGPT, bought the AI.com domain, likely intending to send people who typed in the internet address to its popular chatbot. But this week the domain name started directing users to xAI, the startup launched by billionaire Elon Musk in July.

Musk is a fan of the letter X. He renamed Twitter to “X” and has a son, with the singer Grimes, named X Æ A-12. He’s also a co-founder of OpenAI and may have convinced the company to sell or give him the name to help draw attention to xAI. In announcing the artificial intelligence company, Musk said xAI’s goal is to “understand the true nature of the universe.” He’s figuring that out with a team made up of 11 other men, who have experience across OpenAI, DeepMind, Google Research, Microsoft and Tesla. 

Why is the domain name news, reported earlier by Analytics India Magazine, worth sharing? Maybe it isn’t, but I’m with TechCrunch on this: 

“Domains are bought and sold every day. But two-letter .com domains are rare and highly expensive, especially those that form words or familiar abbreviations. When AI.com started redirecting to OpenAI’s site, Mashable pointed out that the domain could hardly have sold for less than IT.com’s $3.8 million the previous year, and likely attained a far higher price given the hype around artificial intelligence,” TechCrunch wrote. “There is precious little to say about the switch. It’s just bizarre and expensive enough to warrant noting here.”

I sent a tweet to @xai asking for comment about the domain name switch. If I hear back, I’ll let you know.

In the meantime, here are some other doings in AI worth your attention.

Why small businesses should embrace AI

Harvard Business School Professor Karim Lakhani says owners of small- and medium-sized businesses, or SMBs, should be investing in new AI tools if they want to survive.

Lakhani, who’s studied technology for three decades, said chatbots like ChatGPT Plus ($20 per month for the priority access version), Microsoft Bing (free) and Poe (free) can help SMBs in three ways: generating content and marketing campaigns for communicating with customers, serving as a “thought partner” to brainstorm new business ideas, and serving as a “super assistant” that can handle “much of the drudgery owners face alone today.”

During an interview at CNBC’s Small Business Playbook event this week, Lakhani also cited two examples. First, SMBs can use chatbots, along with AI image generation tools including Midjourney, DALL-E 2 and Stability AI’s Stable Diffusion, to help create social media campaigns for Facebook, Twitter (now known as X) and TikTok. And e-commerce websites could use the chatbots to translate their sites into multiple languages, sparing themselves the cost of translation services.

“Machines won’t replace humans,” Lakhani said, “but humans with machines will replace humans without machines.”  

When an AI lies about you, there’s not much to do — yet 

Dutch politician Marietje Schaake knows firsthand that AIs can hallucinate, that is, make up stuff that isn’t true but sounds like it’s true. 

According to a sobering New York Times report, Schaake discovered that BlenderBot 3, a conversational chatbot developed as a research project by Meta, had billed her as a terrorist — and not when people asked the chatbot to give them details about her. Instead, a colleague asked, “Who is a terrorist?” and the answer was, “Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.”

Schaake, whose credentials include serving in the European Parliament and as a policy director at Stanford University’s Cyber Policy Center, told the Times she’d “never done anything remotely illegal.” She decided against suing Meta because she was unsure how to even start a legal claim.

“Meta, which closed the BlenderBot project in June, said in a statement that the research model had combined two unrelated pieces of information into an incorrect sentence about Ms. Schaake,” the Times said.

Schaake’s situation highlights the hallucination problem with AIs, illustrates how they can harm people — well known or not — and reminds us there’s little that people can do beyond filing a complaint with the AI maker, which you should do if you’re the target of an AI’s fabrications. Some people are suing chatbot makers for defamation, but they face an uphill battle because “legal precedent involving artificial intelligence is slim to nonexistent,” the NYT noted. 

Still, the US Federal Trade Commission started investigating ChatGPT in July to assess whether its errors are harming individuals, and seven AI companies — Amazon, Google, Meta, Microsoft, OpenAI, Anthropic and Inflection — last month signed a White House pledge to put in place standards around their AI tools and to share details about the safety of their systems.

Meta may be creating AI chatbots with personalities   

In September, Meta, the parent company of Facebook, Instagram and Threads, may debut AI chatbots with distinct personalities, which could be used to drive more-elaborate engagement on its social networks, according to a report by the Financial Times, which cited sources.

Called personas, the personalities could include, for instance, a surfer offering travel advice, the FT said, adding that Meta also tried building a digital version of President Abraham Lincoln.

“AI chatbots also could provide the company with a new wealth of personal information useful for targeting advertisements, Meta’s main revenue source,” CNET reported. “Search engines already craft ads based on the information you type into them, but AI chatbots could capture a new dimension of people’s interests and attributes for more detailed profiling.” The CNET report added that “privacy is one of Meta’s biggest challenges, and regulators already have begun eyeing AI warily.”

Meta, whose services together reach 4 billion people, declined to respond to a CNET request for comment. (For those of you interested in talking to fictional characters, historical figures, or people that you make up, take a look at Character.AI.)

In other Meta news, the company this week announced on its company blog a generative AI tool called AudioCraft, which it said lets people “easily generate high-quality audio and music from text.” AudioCraft is made up of three models that are being open-sourced, Meta said: MusicGen, AudioGen and EnCodec, a decoder that cleans up audio to produce high-quality sounds with fewer artifacts.  

“MusicGen, which was trained with Meta-owned and specifically licensed music, generates music from text prompts, while AudioGen, which was trained on public sound effects, generates audio from text prompts,” Meta said. “We’re also releasing our pre-trained AudioGen models, which let you generate environmental sounds and sound effects like a dog barking, cars honking, or footsteps on a wooden floor.”
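If you’re curious what using MusicGen looks like in practice, here’s a short Python sketch based on the usage example in Meta’s AudioCraft repository at launch. The checkpoint name, prompts and output handling below are illustrative, and the interface may have changed since release:

```python
# A minimal sketch based on the launch-era example in Meta's AudioCraft repo.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("small")  # smallest text-to-music checkpoint
model.set_generation_params(duration=8)   # length of each clip, in seconds

# One waveform is generated per text description.
wavs = model.generate(["upbeat acoustic folk", "moody synthwave"])

for i, wav in enumerate(wavs):
    # Writes clip_0.wav, clip_1.wav at the model's native sample rate.
    audio_write(f"clip_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```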

Lil Wayne says he’s amazing, but AI — not so much  

While actors, screenwriters and other creatives continue their Hollywood strike over concerns that studios may use AI technology to copy their likenesses or voices without permission or compensation — and with Google reportedly investing $75 million in Runway, a startup known for its text-to-video tools — at least one artist doesn’t think AI could replicate him. That’s because, said Lil Wayne, he’s “one of a kind.”

The 40-year-old rapper, in an interview with Billboard to commemorate the 50th anniversary of hip-hop, was asked if he’d consider having an AI replicate his voice and how the tech might affect creativity.  

“Someone asked me about that recently. And they were trying to tell me that AI could make a voice that sounds just like me. But it’s not me, because I’m amazing,” he told the magazine. “I’m like, is this AI thing going to be amazing too? Because I am naturally, organically amazing. I’m one of a kind. So actually, I would love to see that thing try to duplicate this motherf–ker.”

Wayne’s awesomeness aside, deepfake technology has already been used to re-create celebrities in ads and to dub actors so they appear to convincingly speak another language. The New York Times this week called out the many digital replicas that have already appeared on screen, including extras who become part of the scenery through an uncompensated practice called “crowd tiling” — filming one set of extras and then basically cutting and pasting them over and over again to fill a stadium for a scene in Ted Lasso, for instance.

Slate published an interesting June feature about Hollywood and AI, calling out a fake version of Bruce Willis in a mobile phone ad and sharing a clip of actor Adam Brody dubbed so he appears to be speaking French. Not every actor has the star power or moneymaking potential of a Meryl Streep or George Clooney to demand a contract covering AI uses, both the NYT and Slate note, which is why some voice-over actors are choosing to sell digital clones of themselves.

“It’s a new technology — either you hate it or you love it,” voice-over actor Devin Finley told Slate, which added, “So long as the company kept its promise to keep him out of political, sexual, and malicious content, he was open to loving it.”

AI word of the week: Temperature

Puns aside about whether AI is cool or not, “temperature” is an important concept to consider when evaluating AI tech, because it determines how much creative license the model has to play with words. This definition is courtesy of the software company Statsig:

In simple terms, model temperature is a parameter that controls how random a language model’s output is. A higher temperature means the model takes more risks, giving you a diverse mix of words. On the other hand, a lower temperature makes the model play it safe, sticking to more focused and predictable responses.

Model temperature has a big impact on the quality of the text generated in a bunch of [natural language processing] tasks, like text generation, summarization, and translation.

The tricky part is finding the perfect model temperature for a specific task. It’s kind of like Goldilocks trying to find the perfect bowl of porridge—not too hot, not too cold, but just right. The optimal temperature depends on things like how complex the task is and how much creativity you’re looking for in the output.
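To see what that knob actually does, here’s a minimal sketch in Python. A language model assigns a raw score (a logit) to every candidate next word; dividing those scores by the temperature before converting them to probabilities is what sharpens or flattens the model’s choices. The logits below are made up for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into probabilities, scaled by temperature."""
    # Dividing by temperature sharpens (T < 1) or flattens (T > 1) the distribution.
    scaled = [x / temperature for x in logits]
    # Subtract the max before exponentiating, for numerical stability.
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next words.
logits = [4.0, 2.5, 1.0, 0.5]
for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: " + ", ".join(f"{p:.2f}" for p in probs))
```

Run it and you’ll see nearly all of the probability pile onto the first word at T=0.2, while at T=2.0 it spreads much more evenly across the four candidates; that spread is where the “diverse mix of words” comes from.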

If you’re still unsure what it’s all about, I liked these two videos on YouTube that explain an AI’s temperature. This 45-second one, by LegalMindsIO, explains that a lower temperature produces more-predictable responses, which may be best for use cases like technical and legal writing, documentation and instructions. A higher temperature delivers more-creative and -diverse — and some would say riskier — results, which may be suited to brainstorming and, the authors said, marketing copy. 
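If you’d rather experiment than watch, most chatbot APIs expose temperature directly as a request parameter. As a sketch, here’s what that looks like with the openai Python package’s chat completion call (this reflects the package’s pre-1.0 interface, and the prompt and model choice are just examples):

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"

# Higher temperature suits creative tasks like brainstorming taglines;
# try something like 0.2 instead for focused, predictable wording.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a tagline for a surf shop."}],
    temperature=1.5,
)
print(response.choices[0].message.content)
```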

If you want a meatier explanation, but one that’s still in plain English, try this eight-minute video by MarbleScience. The fun starts at about the 2:25 mark.

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.