The recent announcement of Apple’s partnership with OpenAI has created quite a stir in the tech world. But one person who is not happy about this collaboration is none other than Elon Musk.

Musk, the CEO of Tesla and SpaceX, has long been outspoken about the potential dangers of artificial intelligence (AI). He even co-founded OpenAI in 2015 as a non-profit research company intended to ensure that AI would be developed responsibly, with safety measures in place.

So when Apple announced their partnership with OpenAI, it didn’t sit well with Musk. In a tweet on Monday, he expressed his dissatisfaction by saying, “At Tesla, using anything other than our own AI software for critical applications would be considered suicide.”

He went on to say that if Apple's AI proves to be better than Tesla's, he would switch to using it. Until then, he threatened to ban iPhones and MacBooks at his companies.

This is not the first time Musk has taken shots at Apple. Back in 2015, he jokingly called the company the "Tesla graveyard," claiming that Apple hired engineers who hadn't made it at Tesla.

But this latest threat from Musk raises an important question: how reliant have companies become on AI technology?

AI has become an integral part of our daily lives, from personalized recommendations on social media to self-driving cars. And with the rapid advancement and integration of AI, it is no surprise that tech giants like Apple and Tesla are investing heavily in this field.

But at what cost? Musk's concerns about the potential dangers of AI cannot be ignored. He has warned in interviews that unchecked development of AI could eventually produce an "immortal dictator" ruling the world.

While these may seem like extreme scenarios, they cannot be dismissed entirely. The responsibility falls not only on tech companies but also on governments and regulatory bodies to ensure that AI is developed ethically, with proper safety measures in place.

In the meantime, it remains to be seen whether Musk's threat will have any impact on Apple's partnership with OpenAI. But one thing is certain: this episode has once again brought the discussion of responsible AI development into the spotlight.

Reference: Newsbreak
