One of the advantages of generative AI tech is its natural language capabilities.
This week, we have examples of both.
The product, originally called Bing Image Creator, is powered by OpenAI’s technology.
Jones said Microsoft ignored his findings despite repeated efforts to get the company to address the issues.
The second example has to do with people creating fake images with AI.
Here are the other doings in AI worth your attention.
Biden asks Congress to ban AI voice impersonations, but… That led the Federal Communications Commission in February to ban robocalls using AI-generated voices.
The New Hampshire example definitely shows the dangers of AI-generated voice impersonations.
But do we have to ban them all?
(It also shared a clip if you want to hear it.)
And Calm clearly labeled the 45-minute story as being brought to listeners by “the wonders of technology.”
Claude 3 Sonnet, a less powerful version, is free.
But the paper also noted that the leading AI companies have been “distracted by one controversy after another.
They say the computer chips needed to build AI are in short supply.
And they face countless lawsuits over the way they gather digital data, another ingredient essential to the creation of AI.
(The New York Times has sued Microsoft and OpenAI over use of copyrighted work.)”
Not to mention that the models aren’t exactly delivering stellar results all the time.
That just reinforces what I said above: All this AI “magic” has its dark sides.
Musk argued that they were putting profit above the future of humanity and thereby violating the founding principles of the company.
(The lawsuit, quite the read, can be found here. OpenAI’s response is here.)
“Musk is a solo-entrepreneur,” Hoffman said.
“His gut tends to be, AI is only gonna be safe if I make it …
I am the person that can make it happen versus we should bring in a collaborative group.
And I’m more with the collaborative group.”
This soap opera is far from over.
The open letter, titled A Safe Harbor for Independent AI Evaluation, focuses on three things.
First, the researchers argue that “independent evaluation is necessary for public awareness, transparency and accountability.”
Second, they claim that AI companies' current policies “chill” independent evaluation.
What are they talking about?
“OpenAI claimed in recent court documents that New York Times’s efforts to find potential copyright violations was hacking its ChatGPT chatbot.
After he highlighted his findings, the company amended threatening language in its terms of service.”
But as we’ve seen, deciding the best way to achieve that goal remains contentious.
Now both Microsoft and OpenAI have spelled out their arguments against the paper.
We’ll see what the courts decide.