Last month, the tech outlet The Information reported that OpenAI and its competitors are switching strategies because the pace of AI improvement has slowed dramatically. For a while now, you’ve been able to make AI systems dramatically better at a wide range of tasks just by making them bigger.




Why does this matter? All kinds of problems once thought to require complicated, customized solutions turned out to crumble before sheer scale. We have applications like OpenAI’s ChatGPT because of scaling laws. If they no longer hold, then the future of AI development will look very different – and potentially much less optimistic – than the past.
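(The article never spells out the math, but for readers curious what “scaling laws” refers to concretely, here is a minimal sketch of the empirical form reported by Kaplan et al. (2020) for transformer language models; the constants come from that paper’s fits, not from The Information’s reporting.)

```latex
% Minimal sketch of the empirical scaling law from Kaplan et al.
% (2020), "Scaling Laws for Neural Language Models": test loss L
% falls as a power law in non-embedding parameter count N, when
% data and compute are scaled up alongside.
\[
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
  \qquad \alpha_N \approx 0.076,
  \qquad N_c \approx 8.8 \times 10^{13}.
\]
% Doubling N multiplies the loss by 2^{-\alpha_N} \approx 0.95:
% a small but smooth, predictable gain. That regularity is what
% made "just make it bigger" a workable strategy, and why a
% flattening of this curve would be significant news.
```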




This reporting was greeted with a chorus of “I told you so” from AI skeptics. (I’m not inclined to give them too much credit, since many of them have predicted at least twenty of the last two AI slowdowns.) But it was harder to get a sense of what AI researchers thought about it.




Over the past few weeks, I’ve pressed a number of AI researchers from academia and industry on whether they thought The Information’s story captured a real dynamic – and if so, how it would change the future of AI development.




The general answer I’ve heard is that we should probably expect AI’s impact to increase, not decrease, in the coming years, regardless of whether naive scaling is actually slowing down. That’s because when it comes to AI, a huge amount of impact is already waiting to happen.




There are already powerful systems available that can do a lot of commercially valuable work. It’s just that many of the commercially valuable applications haven’t been figured out yet, let alone put into practice.




It took the internet decades from its birth to transform the world, and it may take decades for AI, too. (Maybe – many people at the forefront of this field still insist that within a few years our world will be unrecognizable.)




The bottom line: If greater scale no longer brings greater returns, that’s a major problem with serious consequences for how the AI revolution will unfold, but it’s no reason to declare the AI revolution canceled.




Lots of people hate AI, but underestimate it too





Here’s something those inside the artificial intelligence bubble may not realize: AI is not a popular new technology, and in fact it’s becoming less popular over time.




I’ve written that I think it poses extreme risks, and many Americans agree with me, but plenty of people also dislike it in a much more mundane way.




The most visible consequences so far are unpleasant and frustrating. Google Image results are full of terrible, low-quality AI slop instead of the cool and varied artwork that used to appear there. Teachers can no longer really assign take-home essays because AI-written work is so widespread, while in turn many students have been wrongly accused of using AI when they weren’t, because AI detection tools are actually terrible. Artists and writers are furious that our work is being used to train models that will then take our jobs.




Much of this frustration is entirely justified. But I think there’s an unfortunate tendency to conflate “AI sucks” with “AI isn’t that useful.” Dismissively asking “what is AI even good for?” is popular, even though the answer is basically that AI is already good for a lot of things, and new applications are being developed at a breathtaking pace.




I think our frustration with AI slop, and with the carelessness with which AI has been developed and deployed, can sometimes lead us to underestimate AI as a whole. Lots of people eagerly seized on the news that OpenAI and its competitors are struggling to make the next generation of models even better, taking it as evidence that the AI wave was all hype and will be followed by bitter disappointment.




Two weeks later, OpenAI announced its latest generation of models, and yes, they are better than ever. (One caveat: It’s hard to say how much of the improvement comes from scale, as opposed to the many other possible sources of improvement, so this doesn’t mean The Information’s reporting was wrong.)




It’s okay to hate AI. But it’s a bad idea to underestimate it. And it’s bad practice to treat every hiccup, setback, limitation, or technical challenge as a reason to expect our world’s AI transformation to stall – or even slow down.




Instead, I think the better way to look at this is that an AI-driven transformation of our world is, at this point, a sure thing. Even if no models larger than today’s are ever trained, existing technology is sufficient for large-scale disruptive change. And all too often, when a limitation appears, it is prematurely declared permanent… and then solved shortly afterward.




After a few rounds of this particular dynamic, I’d like to see if we can cut it off at the pass. Yes, many technical challenges and limitations are real, and they are driving strategic changes at the major AI labs and will shape how progress plays out. No, the latest challenge doesn’t mean the AI wave is over.




AI is here to stay, and our response to it needs to mature beyond wishing it would go away.




A version of this story originally appeared in the Future Perfect newsletter. Sign up here!




