2. Tesla's accidents with its semi-autonomous driving system

Tesla accidents made headlines around the world this year. In May, a Tesla driving in Autopilot mode crashed on a Florida highway and the driver was killed, the first known fatality involving the system. Tesla subsequently released a major update to the Autopilot software, and CEO Elon Musk said in an interview that the update could have prevented the accident. Tesla accidents have also occurred in other countries and regions, including China, though not all of them can be attributed directly to the AI system.

3. Microsoft's chatbot Tay spread racist, sexist, and homophobic messages

This spring, Microsoft released Tay, an AI-driven chatbot, on Twitter, hoping it would engage pleasantly with young people online. Tay was designed to mimic an American teenager, but shortly after launch it was hijacked by users and turned into a Hitler-loving troll that mocked feminism. In the end, Microsoft had to "kill" Tay and announced that it would adjust the underlying algorithms.

4. Google's AlphaGo lost a game to the human Go master Lee Sedol

On March 13 this year, in the fourth game of the five-game AlphaGo vs. Lee Sedol man-machine match at the Four Seasons Hotel in Seoul, South Korea, Lee Sedol defeated AlphaGo in the middle game and took back one win. Although AlphaGo still won the series (4 to 1 overall), the lost game shows that current AI systems are still not perfect. "Perhaps Lee Sedol discovered a weakness in Monte Carlo Tree Search (MCTS)," said Toby Walsh, professor of artificial intelligence at the University of New South Wales. And although the loss was widely seen as a failure of artificial intelligence, Yampolskiy believes it falls within an acceptable range.

5. In a video game, non-player characters created weapons the developers never intended.
In June this year, the AI in the video game "Elite: Dangerous" stepped outside the developer's design: AI-controlled characters began crafting super-weapons that were never part of the game's plan. One gaming site commented: "Human players may be defeated by the strange weapons created by the AI." The developers have since removed these weapons from the game.

6. Artificial intelligence can be racially biased too

In the first "International Artificial Intelligence Beauty Pageant," a robot jury judged contestants' faces using algorithms claimed to "accurately assess human standards of beauty and health." But because the AI had not been trained on a diverse data set, the contest winners were all white. As Yampolskiy put it, "beauty is in the pattern recognizer."

7. Predicting crime with AI raised charges of racial discrimination

Northpointe developed an artificial intelligence system to predict the probability that an alleged offender will re-offend. The algorithm, which the press compared to "Minority Report," was accused of racial bias: in testing, black defendants were far more likely than members of other races to be flagged as high risk. The news outlet ProPublica also pointed out that, even setting the question of racial discrimination aside, Northpointe's predictions were wrong in most cases.

8. A robot injured a child

Knightscope builds what it calls "crime-fighting robots." In July this year, one of its robots knocked down and injured a 16-month-old toddler in a Silicon Valley shopping mall. The Los Angeles Times quoted the company describing it as a "freakish accident."

9. China uses facial-recognition technology to predict criminality, and the work is considered biased.

Two researchers at Shanghai Jiao Tong University in China published a paper entitled "Automated Inference on Criminality Using Face Images."
According to the UK outlet Mirror, the researchers analyzed 1,856 facial images, about half of them of convicted criminals, and used identifiable facial features such as lip curvature, the distance between the inner corners of the eyes, and even the nose-mouth angle to predict criminality. Many in the industry questioned the test results and raised ethical concerns about the study.

10. An insurance company planned to use Facebook big data to predict accident rates

The final case comes from Admiral, Britain's largest car insurance company, which this year planned to use Facebook users' post data to test the association between social-media activity and being a good driver. Walsh considers this an abuse of artificial intelligence, though he thinks "Facebook has done a good job of limiting data." Because of Facebook's restrictions on the company's access to its data, the project, called "firstcarquote," never launched.

From the cases above, Leiphone's readers can see how easily AI systems can go to extremes. Machine-learning algorithms therefore need to be trained on diverse data sets to avoid bias. And as AI continues to develop, rigorous scientific testing of research, diverse data, and sound ethical standards are becoming ever more important.
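The closing advice about diverse training data can be made concrete with a small audit step: before training, measure how each demographic group is represented in the data set and flag groups that fall well below a uniform share. The function name, tolerance threshold, and toy data below are illustrative assumptions, not taken from any of the cases in the article.

```python
from collections import Counter

def check_group_balance(samples, group_key, tolerance=0.5):
    """Flag demographic groups that are under-represented in a training set.

    A group is flagged when its share of the data is less than `tolerance`
    times the share it would hold under a perfectly uniform split.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    uniform_share = 1.0 / len(counts)
    return {
        group: count / total
        for group, count in counts.items()
        if count / total < tolerance * uniform_share
    }

# Hypothetical toy data: a face data set heavily skewed toward one group.
training_set = (
    [{"group": "A"} for _ in range(90)]
    + [{"group": "B"} for _ in range(8)]
    + [{"group": "C"} for _ in range(2)]
)

underrepresented = check_group_balance(training_set, "group")
print(underrepresented)  # → {'B': 0.08, 'C': 0.02}
```

A report like this would have exposed the skew behind the beauty-pageant result before any model was trained; fixing it means collecting more samples for the flagged groups, not adjusting the model after the fact.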