I'd say we go back to the proposed idea of having the nuclear launch codes in someone's chest cavity. That's one way to keep a human permanently "in the loop". But in all seriousness, it seems kind of ironic that one of the safest options might be to "analogue-ize" most of the chain of command and launch capability for nuclear weapons (if it isn't already; I'm not knowledgeable enough to say). Then again, misinformation inside the organization could introduce a human failure into the system ... 🤷♂️
Human error is the cause of most unintended outcomes and accidents.
Would you consider bias in algorithms a form of human error? Would you consider all-white-male venture capital rooms human error? I just read about yet another Asian person fired from a venture capital firm, with the usual complaints. If the same things keep happening, and we project all of this into even more powerful artificial intelligence, I'm curious how human error will compound.
Yes, definitely.
Funny, I was going to throw in a question asking whether you're going to address the inherent gender bias of AI tools. 😁
It's gender bias in venture capital that compounds into everything else as well. It also hasn't changed much in the last decade: (I know we think alike)
https://techcrunch.com/2023/05/01/ann-lai-says-she-was-fired-from-bullpen-capital-after-helping-deliver-a-145m-fund/
I was thinking of gender bias in IT itself, which includes the beaut new man-made AI tools.
Don't forget that even facial recognition was initially built with a Caucasian bias, yet no one noticed or thought about it until facial recognition had trouble reading non-Caucasian skin tones!
Gender bias in venture capital didn't create that problem, nor would more or differently balanced capital investment have prevented the problem.
The blinkered thinking is real.
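To make the facial-recognition point concrete, here's a toy sketch (all numbers and distributions are made up for illustration) of how tuning a detector's decision threshold on data from only one over-represented group can leave another group with far more errors, even though nobody "intended" any bias:

```python
import random

random.seed(0)

# Hypothetical "face detector" scores: faces score high, non-faces low,
# but group B's faces score lower on average than group A's.
def sample(group, n):
    mu_face = 0.8 if group == "A" else 0.6
    faces = [(random.gauss(mu_face, 0.1), 1) for _ in range(n)]
    non_faces = [(random.gauss(0.3, 0.1), 0) for _ in range(n)]
    return faces + non_faces

# Training data comes from group A only (the over-represented group).
train = sample("A", 500)
# Pick a threshold that catches ~95% of group A's faces.
threshold = sorted(s for s, y in train if y == 1)[25]

def error_rate(data):
    return sum((s >= threshold) != bool(y) for s, y in data) / len(data)

err_a = error_rate(sample("A", 500))
err_b = error_rate(sample("B", 500))
print(f"group A error: {err_a:.2%}, group B error: {err_b:.2%}")
```

The threshold looks fine on the group it was tuned on, and the gap only shows up when you measure the other group separately — which is exactly why nobody noticed until deployment.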
Maybe A.I. can help keep corruption from seeping into our political, national-defense, and leadership arenas. I'm beginning to think the game is fixed on multiple levels to profit the few. Keeping "humans in the loop" means having people in charge who are full of bias, prone to errors, able to make bad decisions, and able to set policies that benefit a privileged few to the detriment of others.
The latest regional bank failures are an example of failed policy and regulation at scale. A.I. should be made into a custodial "intelligence" to oversee humanity, so that we don't destroy ourselves or exploit each other to the degree we're on track to.
I get your point, Michael, but I have issues with the concept of a "nanny" AI keeping humanity safe from ourselves when the AI itself is coded and built by biased humans. And it all comes back to alignment: if you give it a utility function based on keeping humans safe, how do we avoid a prison, or some other "efficient" way of keeping us safe, short of extinction? 😂
It can get pretty dicey pretty fast. Idk, I still think a solution could arise in the foreseeable future; in the meantime, it's fun to watch it all unfold.
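The utility-function worry above can be sketched in a few lines (a toy model — the states, harm numbers, and scoring functions are entirely made up). A planner that scores world-states only by "harm avoided" happily picks the prison option, because nothing in its objective values freedom:

```python
# Hypothetical world-states with made-up harm/freedom scores in [0, 1].
states = {
    "status_quo":       {"harm": 0.30, "freedom": 1.0},
    "safeguards":       {"harm": 0.10, "freedom": 0.9},
    "lock_everyone_up": {"harm": 0.01, "freedom": 0.0},
}

def naive_utility(s):
    # Rewards only safety: less harm is strictly better.
    return 1.0 - s["harm"]

def better_utility(s):
    # Also values freedom, so total confinement scores zero.
    return (1.0 - s["harm"]) * s["freedom"]

naive_choice = max(states, key=lambda k: naive_utility(states[k]))
better_choice = max(states, key=lambda k: better_utility(states[k]))
print(naive_choice)   # lock_everyone_up
print(better_choice)  # safeguards
```

The point of the sketch isn't that the fix is easy — it's that every value you forget to put in the objective gets traded away for the ones you did.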