Software To Recognize Faces Is Found To Be Biased
The majority of commercial facial-recognition systems exhibit bias, according to a study from a federal agency released recently, underscoring questions about a technology increasingly used by police departments and federal agencies to identify suspected criminals.
The systems falsely identified African American and Asian faces 10 to 100 times more often than Caucasian faces, the National Institute of Standards and Technology reported. In a database of photos used by law enforcement agencies in the United States, the highest error rates came in identifying Native Americans, the study found.
The technology also had more difficulty identifying women than men. And it falsely identified older adults up to 10 times more than middle-aged adults.
The new report comes at a time of mounting concern from lawmakers and civil rights groups over the proliferation of facial recognition. Proponents view it as an important tool for catching criminals and tracking terrorists. Tech companies market it as a convenience that can be used to help identify people in photos or in lieu of a password to unlock smartphones.
Civil liberties experts, however, warn that the technology — which can be used to track people at a distance without their knowledge — has the potential to lead to ubiquitous surveillance, chilling freedom of movement and speech. Last year, San Francisco, Oakland and Berkeley in California and the Massachusetts communities of Somerville and Brookline banned government use of the technology.
“One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse,” Jay Stanley, a policy analyst at the American Civil Liberties Union, said in a statement. “Government agencies including the FBI, Customs and Border Protection and local law enforcement must immediately halt the deployment of this dystopian technology.”
The federal report is one of the largest studies of its kind. The researchers had access to more than 18 million photos of about 8.5 million people from American mug shots, visa applications and border-crossing databases.
The National Institute of Standards and Technology tested 189 facial-recognition algorithms from 99 developers, representing the majority of commercial developers. They included systems from Microsoft, biometric technology companies like Cognitec, and Megvii, an artificial intelligence company in China.
The federal report confirms earlier studies from MIT that reported that facial-recognition systems from some large tech companies had much lower accuracy rates in identifying female and darker-skinned faces than white male faces.