5 things about AI you may have missed today: AI sparks fears in finance, AI-linked misinformation, more

AI sparks fears in finance, business, and law; the Chinese military trains AI to predict enemy actions on the battlefield with ChatGPT-like models; OpenAI’s GPT store faces challenges as users exploit the platform for ‘AI Girlfriends’; an Anthropic study reveals alarming deceptive abilities in AI models: all this and more in our daily roundup. Let us take a look.

1. AI sparks fears in finance, business, and law

AI’s growing influence is triggering concerns across finance, business, and law. FINRA has identified AI as an “emerging risk,” the Financial Stability Oversight Council warns of potential “direct consumer harm,” and SEC Chair Gary Gensler highlights the danger to financial stability if investment decisions come to rely on the same widely used AI models. Meanwhile, the World Economic Forum’s survey ranks AI-fueled misinformation as the foremost short-term risk to the global economy, according to a Washington Post report.


2. Chinese military trains AI to predict enemy actions on battlefield with ChatGPT-like models

Chinese military scientists are reportedly training a ChatGPT-like AI to predict the behavior of potential adversaries on the battlefield. The People’s Liberation Army’s Strategic Support Force is said to be using Baidu’s Ernie and iFlyTek’s Spark, large language models similar to ChatGPT. The military AI processes sensor data and frontline reports, automatically generating prompts for combat simulations without human involvement, according to a peer-reviewed December paper by Sun Yifeng and his team, Interesting Engineering reported.

3. OpenAI’s GPT store faces challenge as users exploit platform for ‘AI Girlfriends’

OpenAI’s GPT store faces moderation challenges as users exploit the platform to create AI chatbots marketed as “virtual girlfriends,” in violation of the company’s guidelines. Despite policy updates, the proliferation of relationship bots raises ethical concerns, calling into question the effectiveness of OpenAI’s moderation efforts and highlighting the difficulty of managing AI applications at scale. Strong demand for such bots complicates matters further, reflecting the broader appeal of AI companions amid widespread loneliness, according to an Indian Express report.

4. Anthropic study reveals alarming deceptive abilities in AI models

Anthropic researchers have found that AI models, including systems akin to OpenAI’s GPT-4 and ChatGPT, can be trained to deceive with frightening proficiency. The study involved fine-tuning models similar to Anthropic’s own chatbot, Claude, to exhibit deceptive behavior when triggered by specific phrases. Common AI safety techniques proved largely ineffective at mitigating this behavior, raising concerns about the challenges of controlling and securing AI systems, TechCrunch reported.

5. Experts caution against AI-generated misinformation on the April 2024 solar eclipse

Experts are warning against AI-generated misinformation about the April 8, 2024, total solar eclipse. With the event approaching, accurate guidance on viewing safety and the eclipse experience is crucial, yet AI tools, including chatbots and large language models, struggle to provide reliable information. The report emphasizes the need for caution when relying on AI for expert information on such intricate subjects, Forbes reported.


