We're Screwed - Suno v4

Damn. So what does this mean with quantum computing around the corner?
Great question.

Quantum is going to exponentially increase the power of AI to unexpected levels, but it will take a while for the models, structures, rules, etc., to be created for it... I suspect eventually it will do things on its own that we won't even realize it's doing. We've seen generative AI take information and training and draw its own conclusions, thoughts, world view... more like an opinion, including biases and hallucinations. Quantum may be what creates truly self-aware AI, and we may not even know it is happening.
 
holy cow that is informative and scary man. People have no idea what we're in for. Thanks for the expertise, and I don't feel better. :LOL:
 
One thing he admitted but quickly glossed over in the video I posted is error rates. Quantum computing could eventually outstrip our ability to keep up, and our understanding of what it is doing, learning, remembering, "thinking" and hiding. Small errors may go unnoticed and have big impacts long after they enter the system, and large numbers of small errors accumulating over time may corrupt or pollute the system without anyone catching it.
 
Oh man, I work with backend systems, so I completely understand the "little error" syndrome. This is getting scarier by the minute; that's telling me we won't even have the "capability" to monitor these small corruptions. I know what they can do long-term. Ugh
 
I thought Orwell was bad; we are heading right for damn Warhammer

I think ethics, governance and accountability are going to be the real challenge. Not everyone is going to agree on ethics; some may even intentionally subvert them...

Back in the early days (1990), I developed a neural network for fraud detection and had it at about a 90%+ success rate. However, it was not easy to explain how the program came up with its answer; that required understanding how the underlying mathematical calculations worked, the training data, etc. This is known as "explainability," which was non-existent then. The project did not get deployed into production...

Since then there have been many efforts to explain neural network decisions, so it has improved.
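One of the simpler post-hoc techniques that emerged is permutation importance: shuffle one input feature at a time and watch how much the model's accuracy drops. Here's a minimal sketch of the idea — the stand-in scorer, its weights, the feature names, and the tiny dataset are all invented for illustration, not from any real fraud system:

```python
import random

# Hypothetical stand-in for an opaque fraud model: flags a transaction
# when a weighted score of (amount, hour, country_risk) crosses 0.5.
def model(amount, hour, country_risk):
    score = amount / 1000 + 0.4 * country_risk + 0.05 * (hour >= 22)
    return 1 if score > 0.5 else 0

# Tiny synthetic dataset: ((amount, hour, country_risk), fraud_label)
data = [
    ((200, 14, 0.1), 0), ((950, 23, 0.9), 1),
    ((120, 9, 0.2), 0), ((800, 2, 0.8), 1),
    ((60, 11, 0.0), 0), ((700, 22, 0.7), 1),
]

def accuracy(rows):
    return sum(model(*x) == y for x, y in rows) / len(rows)

baseline = accuracy(data)

# Permutation importance: shuffle one feature column at a time; the
# bigger the accuracy drop, the more the model leans on that feature.
random.seed(0)
importances = {}
for i, name in enumerate(["amount", "hour", "country_risk"]):
    col = [x[i] for x, _ in data]
    random.shuffle(col)
    shuffled = [(tuple(col[k] if j == i else v for j, v in enumerate(x)), y)
                for k, (x, y) in enumerate(data)]
    importances[name] = baseline - accuracy(shuffled)
    print(f"{name}: importance = {importances[name]:.2f}")
```

It doesn't open the black box, but it at least ranks which inputs drive the decisions — the kind of answer nobody could give back then.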


Now, take that to quantum computing: we are going to get left in the dust as it out-processes and out-thinks us, without some way of knowing, external to it, what it knows and is doing... like a conscience wrapper that can assess, question, approve, or deny what a quantum AI does.
 
Love him or hate him, the only high-level person I've heard talk about ethics, governance, and accountability regarding AI is Elon. Maybe they should put him in charge of some AI dept, and let Vivek run the DoGE.
 

Ethics and governance are big focus areas at my company. Doing it costs more and slows development down... so we're working on making it better, cheaper, and faster to use... and a requirement for certain applications... but it's going to take government legislation and oversight to enforce compliance.

Someone will try to get around it... as long as it's not running the air traffic control system or the intensive care units at a hospital, etc., we should be fine :doh:
 
wicked regarding the neural network. My friend worked in early AI in game development, so I have a basic understanding.
 
I worked on a few neural network solutions long ago, as I found them more interesting than the rules-based expert systems that were popular back then, though I did those too.

Another neural network I worked on was for filtering sonar data for submarines to help "see" what was out there. The results could be added like a lens over the sonar data to visualize both what the sonar saw and what the neural net thought it was seeing.
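That "lens" idea can be sketched in a few lines: run a detector over a grid of raw intensities and mark its hits on top of the data, so a viewer sees both at once. This is a toy mock-up, not the actual system — the grid values, the 0.5 threshold, and the bracket rendering are all invented for illustration:

```python
# Toy grid of raw sonar return intensities (rows x columns), made up.
grid = [
    [0.1, 0.2, 0.1, 0.1],
    [0.2, 0.9, 0.8, 0.1],
    [0.1, 0.8, 0.9, 0.2],
    [0.1, 0.1, 0.2, 0.1],
]

# Stand-in for the neural net: flags cells whose intensity suggests a contact.
def detector(intensity):
    return intensity > 0.5

# The "lens": print raw intensities, bracketing the cells the net flagged,
# so both the sonar data and the net's interpretation are visible together.
for row in grid:
    print(" ".join(f"[{v:.1f}]" if detector(v) else f" {v:.1f} " for v in row))
```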
 
That is super cool, man. Any chance you could consult on the drones? :LOL:
 