- This topic contains 3 replies, has 1 participant, and was last updated on 15 July 2023 at 20:34 by Anna Krasko.
15 September 2022 at 22:39 – #10270 – Anna Krasko (@akrasko97)
The Artificial Intelligence Act – EU (Sept 2022) – spotted on Yfke's site https://www.yfkelaanstra.com/
What is the EU AI Act?
The AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.
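The three-tier structure described above can be sketched as a simple lookup. This is a toy illustration only: the category names and example applications are assumptions for the sketch, not text from the Act itself.

```python
# Toy illustration of the AI Act's three-tier risk structure described above.
# The example applications are illustrative assumptions, not text from the Act.

UNACCEPTABLE = {"government social scoring"}      # banned outright
HIGH_RISK = {"cv-scanning tool", "exam scoring"}  # allowed, but regulated

def risk_category(application: str) -> str:
    """Map an AI application to one of the Act's three risk tiers."""
    if application in UNACCEPTABLE:
        return "unacceptable (banned)"
    if application in HIGH_RISK:
        return "high-risk (specific legal requirements)"
    return "minimal risk (largely unregulated)"

print(risk_category("cv-scanning tool"))  # high-risk (specific legal requirements)
```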
Why should you care?
AI applications influence what information you see online by predicting what content is engaging to you, capture and analyse data from faces to enforce laws or personalise advertisements, and are used to diagnose and treat cancer. In other words, AI affects many parts of your life.
Like the EU’s General Data Protection Regulation (GDPR) in 2018, the EU AI Act could become a global standard, determining to what extent AI has a positive rather than negative effect on your life wherever you may be. The EU’s AI regulation is already making waves internationally. In late September, Brazil’s Congress passed a bill that creates a legal framework for artificial intelligence.
How can it be improved?
There are several loopholes and exceptions in the proposed law. These shortcomings limit the Act’s ability to ensure that AI remains a force for good in your life. Currently, for example, facial recognition by the police is banned unless the images are captured with a delay or the technology is being used to find missing children.
In addition, the law is inflexible. If in two years’ time a dangerous AI application is used in an unforeseen sector, the law provides no mechanism to label it as “high-risk”. For more detailed analyses, please see this page.
26 January 2023 at 18:21 – #11814 – Anna Krasko (@akrasko97)
A conversation between Lynne McTaggart and Gregg Braden – humanity, technology, our true potential
Watch the amazing conversation between Lynne McTaggart and Gregg Braden about the ‘transhuman’ movement and why we already have latent human capacities more amazing than any AI could ever be. (approx. 1 hour)
Topics discussed:
transhumanism and ‘upgrading’ via technology; the value we attach to our humanity and to (re)learning to use our human potential; Elon Musk’s Neuralink; the use of intention
6 May 2023 at 21:03 – #13414 – Anna Krasko (@akrasko97)
Elon Musk FULL INTERVIEW with Tucker Carlson: AI, OpenAI, TruthGPT, Twitter – https://www.youtube.com/watch?v=a2ZBEC16yH4
- Why is it important to regulate AI and super AI?
- What does Elon Musk say about information and companies that remain open and/or turn into closed systems?
- Twitter and the importance of free speech; interference and influence by various (government) organisations in e.g. Twitter
Short version (11:09)
15 July 2023 at 20:34 – #13951 – Anna Krasko (@akrasko97)
Linking Chips With Light For Faster AI – making chips communicate even faster using light waves
(IEEE Spectrum, April 2023: https://spectrum.ieee.org/photonics-and-ai#toggle-gdpr)
Cass: Okay, great. So you talked about two companies that are in this sort of race to put light inside computers. So can we talk a little bit about who they are and what their different approaches are?
Moore: Sure, these are two startups, and they’re not alone. There are very likely other startups in stealth mode, and there are giants like Intel that are also in this race as well. But what these two startups, Ayar Labs, that’s A-Y-A-R—and I’m probably pronouncing it a little weird—and Avicena, those are the two that I profiled in the January issue. And they’re representative of two very different sort of takes on this same idea. Let me start with Ayar, which is really sort of the— it’s sort of what we’re using right now but on steroids. Like the links that you find already in data centers, it uses infrared laser light, kind of breaks it into several bands. I can’t remember if it’s 8 or 16, but so they’ve got multiple channels kind of in each fiber. And it uses silicon photonics to basically modulate and detect the signals. And what they bring to the table is they have, one, a really good laser that can sit on a board next to the chip, and also they’ve managed to shrink down the silicon photonics, the modulation and the detection and the associated electronics that makes that actually happen, quite radically compared to what’s out there right now. So really they are sort of just— I mean, it’s weird to call them a conservative play because they really do have great technology, but it is just sort of taking what we’ve got and making it work a lot better.
Avicena is doing something completely different. They aren’t using lasers at all. They’re using microLEDs, and they’re blue. These are made of gallium nitride. And why this might work is that there is a rapidly growing microLED display industry with big backers like Meta and Apple. So the problems you might find with a new industry are kind of getting solved by other people. And so what Avicena does is they basically make a little microLED display on a chiplet, and they attach a particular kind of fiber. It’s sort of like an imaging fiber. If you’ve ever had an endoscopy exam, you’ve had a close encounter with one of these. And basically, it has a bunch of fiber channels in it. The one they use has about 300 channels in a half-millimeter bundle. And they stick the end of that fiber on top of the display so that each microLED in the display has its own channel. And so you have this sort of parallel path for light to come off of the chip. And they modulate the microLEDs, just flicker them. And they found a way to do that a lot faster than other people. People thought there were going to be real hard limits to this. But they’ve gotten as high as ten gigabits per second. Their first product will probably be in the three-gigabit-per-second kind of area, but it’s really surprisingly rapid. People weren’t thinking that microLEDs could do this, but they can. And so that should provide a very powerful pathway between microprocessors.
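Some back-of-the-envelope arithmetic on the Avicena numbers quoted above (roughly 300 parallel fiber channels, 10 Gb/s per channel demonstrated, nearer 3 Gb/s per channel in a first product). This just multiplies the figures from the interview; it is not an official Avicena specification.

```python
# Rough aggregate-bandwidth arithmetic for the microLED approach described above.
# Each of the ~300 fiber channels carries one microLED's signal in parallel.
channels = 300

demonstrated_gbps = 10   # per-channel rate reached so far, per the interview
first_product_gbps = 3   # per-channel rate expected in a first product

print(channels * demonstrated_gbps)    # 3000 Gb/s, i.e. ~3 Tb/s aggregate
print(channels * first_product_gbps)   # 900 Gb/s aggregate
```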
Cass: So what’s the market for this technology? I mean, I presume we’re not looking to see it in our phones anytime soon. So who really is spending the money for this?
Moore: It’s funny you should mention phones—and I’ll get back to it—because it’s definitely not the first adopter, but there may actually be a role for it in there. Your likely first adopter are actually companies like Nvidia, which I know are very interested in this sort of thing. They are trying to tie together their really super powerful GPUs as tightly as possible so that they can— in the end, ideally, they want something that will bind their chips together so tightly that it’s as if it was one gigantic chip. Even though it’s physically spread across eight racks with each server having four or eight of these chips. So that’s what they’re looking for. They need to reduce the distance, both in energy and in sort of time, to their other processor units and to and from memory so that they kind of wind up with this really tightly bound computing machine. And when I say tightly bound, the ideal is to bind them all together as one. But the truth is the way people use computing resources, what you want to do is just pull together what you need. And so this is a technology that will allow them to do that.
So it’s really the big iron people that are going to be the early adopters for this sort of thing. But in your phone, there’s actually a sort of bandwidth-limited pathway between your camera and the processor. And Avicena in particular is actually kind of interested in putting these together, which would mean that your camera can be in a different place than it is right now with regard to the processor. Or you could come up with completely different configurations of a mobile device.
. . .
Cass: Because that’s something you’ve reported before on the challenge of integrating photonics with silicon so you don’t have to go off-chip. But there’s kind of been a long and somewhat—don’t want to say troubled—but a challenging history there.
Moore: Yeah, and the reason it’s become suddenly less challenging, actually, is that the world is moving towards chiplets, as opposed to monolithic silicon system on chips. So even just a few years ago, everybody was just making the biggest chip they could, filling it up. Moore’s Law has been not delivering, you know, quite as much as it has in the past.
And so there’s a new solution. You can add silicon by finding a way to bind two separate pieces of silicon together almost as tightly as if they were one chip. And this is a packaging technology. Packaging is something that people didn’t really care about so much 10 years ago, but now it’s actually super important. So there’s 3D-packaging-type situations where you’ve got chips stacked on chips. You’ve got what are called 2-and-a-half-D, which is really— it’s 2D. But they’re within less than a millimeter of each other, and the number of connections that you can make at that scale is much closer to what you have on the chip. And then so you put these chiplets of silicon together, and you package them all in one. And that is sort of the way advanced processors are being made right now. One of those chiplets, then, can be silicon photonics, which is a completely different— it’s a different manufacturing process than you would have for your main processor and stuff. And because of these packaging technologies, you can put chips made with different technologies together and sort of bind them electrically, and they will work just fine. And so because there is this sort of chiplet landing pad now, companies like Avicena and Ayar, they have a place to go that’s kind of easy to get to.
Cass: So you mentioned Nvidia and GPUs there, which are now really associated with machine learning. So is that what’s driving a lot of this: these machine learning, deep learning things that are just chewing through enormous amounts of data?
Moore: Yeah, the real driver is things like ChatGPT and all of these natural language processors, which belong to a class called transformer neural networks. I’m a little unclear as to why, but they are just huge. They have ridiculous numbers of parameters, trillions of them, the weights and the activations that actually make up the guts of a neural network. And there’s, unfortunately, sort of no end in sight. It seems like if you just make it bigger, you can make it better. And it’s not so much the running of the inference, the getting your answer; it’s training them that is really the problem. In order to train something that big and have it done this year, you really need a lot of computing power. That was the reason for companies like Cerebras: instead of something taking weeks, it takes hours; instead of something taking months and months, taking a couple of days means that you can actually learn to use and train one of these giant neural networks in a reasonable amount of time and, frankly, do experiments so that you can make better ones. I mean, if your experiment takes four months, it really slows down the pace of development. So that’s the real driver: training these gigantic transformer models.
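The months-to-days point above can be made concrete with a toy scaling model. The assumption, a deliberate simplification, is that training time shrinks in direct proportion to effective compute; real clusters never scale perfectly, and the baseline figure is illustrative, not from the interview.

```python
# Toy model: training_time = baseline_time / effective_speedup.
# Assumes perfect scaling with compute, which real clusters never achieve.

def training_days(baseline_days: float, speedup: float) -> float:
    """Days to train, given a baseline time and a compute speedup factor."""
    return baseline_days / speedup

baseline = 120.0  # ~four months on a baseline system (illustrative figure)
print(training_days(baseline, 60))  # 2.0 days with a 60x effective speedup
```

At a 60x effective speedup, a four-month run drops to about two days, which is the difference between one experiment per quarter and iterating several times a week.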
Cass: So what kind of time frame are we talking about in terms of when might we see these kind of things popping up in data centers? And then, I guess, when might we see them coming to our phone?
Moore: Okay, so I know that Ayar Labs, that’s the startup that uses the infrared lasers, is actually working on prototype computers with partners this year. It’s unlikely that we will actually see the results of those from them. They’re just not likely to be made public. But when pressed, 2025-’26 kind of time frame, the CEO of Ayar thought was an okay estimate. It might take a little longer for others.
. . .