facial recognition technology Archives | IT Business Edge

The Toll Facial Recognition Systems Might Take on Our Privacy and Humanity
https://www.itbusinessedge.com/business-intelligence/facial-recognition-privacy-concerns/
Fri, 22 Jul 2022

Artificial intelligence really is everywhere in our day-to-day lives, and one area that’s drawn a lot of attention is its use in facial recognition systems (FRS). This controversial collection of technology is one of the most hotly-debated among data privacy activists, government officials, and proponents of tougher measures on crime.

Enough ink has been spilled on the topic to fill libraries, but this article is meant to distill some of the key arguments, viewpoints, and general information related to facial recognition systems and the impacts they can have on our privacy today.

What Are Facial Recognition Systems?

The technology behind FRS, and the landscape of who develops it, can be complicated. It's best to have a basic idea of how these systems work before diving into the ethical and privacy concerns surrounding their use.

How Do Facial Recognition Systems Work?

On a basic level, facial recognition systems operate on a three-step process. First, the hardware, such as a security camera or smartphone, records a photo or video of a person.

That photo or video is then fed into an AI program that maps and analyzes the geometry of the person's face, such as the distance between the eyes or the contours of the face. The AI also identifies specific facial landmarks, like the forehead, eye sockets, eyes, or lips.

Finally, all these landmarks and measurements come together to create a digital signature which the AI compares against its database of digital signatures to see if there is a match or to verify someone’s identity. That digital signature is then stored on the database for future reference.
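To make the matching step more concrete, here is a minimal sketch, assuming the "digital signature" is a numeric embedding vector produced by some face model. The names, the 128-dimension size, and the similarity threshold are illustrative only, and the embeddings are stubbed out with random vectors rather than a real model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two face 'digital signatures' (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict, threshold: float = 0.6):
    """Return the closest enrolled identity, or None if no score clears the threshold."""
    best_name, best_score = None, -1.0
    for name, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical usage: in a real system these vectors would come from a trained face model.
database = {"alice": np.random.rand(128), "bob": np.random.rand(128)}
probe = np.random.rand(128)  # signature computed from the new photo or video frame
print(identify(probe, database))
```

The threshold is the lever that trades false positives against false negatives, which is why database size and accuracy figures matter so much in the debates below.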

Read More At: The Pros and Cons of Enlisting AI for Cybersecurity

Use Cases of Facial Recognition Systems

A technology like facial recognition is broadly applicable to a number of different industries. Two of the most obvious are law enforcement and security. 

With facial recognition software, law enforcement agencies can track suspects and offenders unfortunate enough to be caught on camera, while security firms can utilize it as part of their access control measures, checking people’s faces as easily as they check people’s ID cards or badges.

Access control in general is the most common use case for facial recognition so far. It generally relies on a smaller database (i.e. the people allowed inside a specific building), meaning the AI is less likely to hit a false positive or a similar error. Plus, it’s such a broad use case that almost any industry imaginable could find a reason to implement the technology.

Facial recognition is also a hot topic in the education field, especially in the U.S., where vendors pitch facial recognition surveillance systems as a potential solution to the school shootings that plague the country more than any other nation. It has additional uses in virtual classroom platforms as a way to track student activity and other metrics.

In healthcare, facial recognition can theoretically be combined with emergent tech like emotion recognition for improved patient insights, such as detecting pain or monitoring a patient's health status. It can also be used during check-in as a no-contact alternative to traditional check-in procedures.

The world of banking saw an increase in facial recognition adoption during the COVID-19 pandemic, as financial institutions looked for new ways to safely verify customers’ identities.

Some workplaces already use facial recognition as part of their clock-in-clock-out procedures. It’s also seen as a way to monitor employee productivity and activity, preventing folks from “sleeping on the job,” as it were. 

Companies like HireVue developed software that used facial recognition to help determine the hireability of prospective employees. HireVue discontinued the facial analysis portion of its software in 2021, however, citing public concerns over AI and findings that the visual components added little to the software's effectiveness.

Businesses that sell age-restricted products, such as bars or grocery stores with liquor licenses, could use facial recognition to better prevent underage customers from buying those products.

Who Develops Facial Recognition Systems?

The people developing FRS are many of the same usual suspects who push other areas of tech research forward. As always, academics are some of the primary contributors to facial recognition innovation. The field was started in academia in the 1950s by researchers like Woody Bledsoe.

In a more recent example, The Chinese University of Hong Kong created the GaussianFace algorithm in 2014, which its researchers reported had surpassed human-level facial recognition. The algorithm scored 98.52% accuracy, compared to 97.53% for human performance.

In the corporate world, tech giants like Google, Facebook, Microsoft, IBM, and Amazon have been just some of the names leading the charge.

Google’s facial recognition is utilized in its Photos app, which infamously mislabeled a picture of software engineer Jacky Alciné and his friend, both of whom are black, as “gorillas” in 2015. To combat this, the company simply blocked “gorilla” and similar categories like “chimpanzee” and “monkey” on Photos.

Amazon was even selling its facial recognition system, Rekognition, to law enforcement agencies until 2020, when the company banned police use of the software. The ban is still in effect as of this writing.

Facebook used facial recognition technology on its social media platform for much of the platform’s lifespan. However, the company shuttered the software in late 2021 as “part of a company-wide move to limit the use of facial recognition in [its] products.”

Additionally, firms that specialize in facial recognition software, like Kairos, Clearview AI, and Face First, are contributing their knowledge and expertise to the field.

Read More At: The Value of Emotion Recognition Technology

Is This a Problem?

To answer the question of whether we should be worried about facial recognition systems, it helps to look at some of the arguments that proponents and opponents of the technology commonly make.

Why Use Facial Recognition?

The most common argument in favor of facial recognition software is that it provides more security for everyone involved. In enterprise use cases, employers can better manage access control, while lowering the chance of employees becoming victims of identity theft.

Law enforcement officials say the use of FRS can aid their investigative abilities to make sure they catch perpetrators quickly and more accurately. It can also be used to track victims of human trafficking, as well as individuals who might not be able to communicate such as people with dementia. This, in theory, could reduce the number of police-caused deaths in cases involving these individuals.

Human trafficking and sex-related crimes are an oft-cited justification among proponents of this technology in law enforcement. Vermont, the state with the strictest ban on facial recognition, peeled its ban back slightly to allow the technology's use in investigating child sex crimes.

For banks, facial recognition could reduce the likelihood and frequency of fraud. With biometric data like what facial recognition requires, criminals can’t simply steal a password or a PIN and gain full access to your entire life savings. This would go a long way in stopping a crime for which the FTC received 2.8 million reports from consumers in 2021 alone.

Finally, some proponents say, the technology is so accurate now that the worries over false positives and negatives should barely be a concern. According to a 2022 report by the National Institute of Standards and Technology, top facial recognition algorithms can have a success rate of over 99%, depending on the circumstances.

With accuracy that good and use cases that strong, facial recognition might just be one of the fairest and most effective technologies we can use in education, business, and law enforcement, right? Not so fast, say the technology’s critics.

Why Ban Facial Recognition Technology?

First, accuracy isn't the primary concern for many critics of FRS. To them, whether the technology is accurate or not is beside the point.

While academia is where much facial recognition research is conducted, it is also where many of the concerns and criticisms are raised regarding the technology's use in areas of life such as education and law enforcement.

Northeastern University Professor of Law and Computer Science Woodrow Hartzog is one of the most outspoken critics of the technology. In a 2018 article Hartzog said, “The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled.”

The concerns over privacy are numerous. As AI ethics researcher Rosalie A. Waelen put it in a 2022 piece for AI & Ethics, “[FRS] is expected to become omnipresent and able to infer a wide variety of information about a person.” The information it is meant to infer is not necessarily information an individual is willing to disclose.

Facial recognition technology has demonstrated difficulties identifying individuals across races, ethnicities, genders, and ages. When used by law enforcement, this can lead to false arrests, wrongful imprisonment, and other harms.

As a matter of fact, it already has. In Detroit, Robert Williams, a black man, was incorrectly identified by facial recognition software as a watch thief and falsely arrested in 2020. After being detained for 30 hours, he was released due to insufficient evidence after it became clear that the photographed suspect and Williams were not the same person.

This wasn't the only such case in Detroit, either. Michael Oliver was wrongly identified by facial recognition software as the person who threw a teacher's cell phone and broke it.

A similar case happened to Nijeer Parks in late 2019 in New Jersey. Parks was detained for 10 days for allegedly shoplifting candy and trying to hit police with a car. Facial recognition falsely identified him as the perpetrator, despite Parks being 30 miles away from the incident at the time. 

There is also, in critics’ minds, an inherently dehumanizing element to facial recognition software and the way it analyzes the individual. Recall the aforementioned incident wherein Google Photos mislabeled Jacky Alciné and his friend as “gorillas.” It didn’t even recognize them as human. Given Google’s response to the situation was to remove “gorilla” and similar categories, it arguably still doesn’t.

Finally, there is the issue of what would happen if the technology were 100% accurate. The dehumanizing element doesn't just go away if Photos can suddenly determine that a person of color is, in fact, a person of color.

The way these machines see us is fundamentally different from the way we see each other, because the machines' way of seeing goes only one way. As Andrea Brighenti said, facial recognition software "leads to a qualitatively different way of seeing … [the subject is] not even fully human. Inherent in the one way gaze is a kind of dehumanization of the observed."

In order to get an AI to recognize human faces, you have to teach it what a human is, which can, in some cases, cause it to take certain human characteristics outside of its dataset and define them as decidedly “inhuman.”

Moreover, critics argue, making facial recognition technology more accurate at detecting people of color only serves to make law enforcement and business-related surveillance more effective. This means that, as researchers Nikki Stevens and Os Keyes noted in their 2021 paper for the academic journal Cultural Studies, "efforts to increase representation are merely efforts to increase the ability of commercial entities to exploit, track and control people of colour."

Final Thoughts

Ultimately, how much one worries about facial recognition technology comes down to a matter of trust. How much trust does a person place in the police, in Amazon, or in any random individual who gets their hands on this software and the power it provides, to use it only "for the right reasons"?

This technology provides institutions with power, and when thinking about giving power to an organization or an institution, one of the first things to consider is the potential for abuse of that power. For facial recognition, specifically for law enforcement, that potential is quite large.

In an interview for this article, Frederic Lederer, William & Mary Law School Chancellor Professor and Director of the Center for Legal & Court Technology, shared his perspective on the potential abuses facial recognition systems could facilitate in the U.S. legal system:

“Let’s imagine we run information through a facial recognition system, and it spits out 20 [possible suspects], and we had classified those possible individuals in probability terms. We know for a fact that the system is inaccurate and even under its best circumstances could still be dead wrong.

If what happens now is that the police use this as a mechanism for focusing on people and conducting proper investigation, I recognize the privacy objections, but it does seem to me to be a fairly reasonable use.

The problem is that police officers, law enforcement folks, are human beings. They are highly stressed and overworked human beings. And what little I know of reality in the field suggests that there is a large tendency to dump all but the one with the highest probability, and let’s go out and arrest him.”

Professor Lederer believes this is a dangerous idea, however:

“…since at minimum the way the system operates, it may be effectively impossible for the person to avoid what happens in the system until and unless… there is ultimately a conviction.”

Lederer explains that the Bill of Rights guarantees individuals the right to a "speedy trial." However, as courts have interpreted that right, arrested individuals may spend a year or more in jail before the courts even think about a speedy trial.

Add to that plea bargaining:

“…Now, and I don’t have the numbers, it is not uncommon for an individual in jail pending trial to be offered the following deal: ‘plead guilty, and we’ll see you’re sentenced to the time you’ve already been [in jail] in pre-trial, and you can walk home tomorrow.’ It takes an awful lot of guts for an individual to say ‘No, I’m innocent, and I’m going to stay here as long as is necessary.’

So if, in fact, we arrest the wrong person, unless there is painfully obvious evidence that the person is not the right person, we are quite likely to have individuals who are going to serve long periods of time pending trial, and a fair number of them may well plead guilty just to get out of the process.

So when you start thinking about facial recognition error, you can’t look at it in isolation. You have to ask: ‘How will real people deal with this information and to what extent does this correlate with everything else that happens?’ And at that point, there’s some really good concerns.”

As Lederer pointed out, these abuses already happen in the system, but facial recognition systems could exacerbate and multiply them. They can perpetuate pre-existing biases and systemic failings, and even if their potential benefits are enticing, the potential harm is too present and real to ignore.

Of the viable use cases of facial recognition that have been explored, the closest thing to a “safe” use case is ID verification. However, there are plenty of equally effective ID verification methods, some of which use biometrics like fingerprints.

In reality, there might not be any “safe” use case for facial recognition technology. Any advancements in the field will inevitably aid surveillance and control functions that have been core to the technology from its very beginning.

For now, Lederer said he hasn’t come to any firm conclusions as to whether the technology should be banned. But he and privacy advocates like Hartzog will continue to watch how it’s used.

Read Next: What’s Next for Ethical AI?

Microsoft Drops Emotion Recognition as Facial Analysis Concerns Grow
https://www.itbusinessedge.com/business-intelligence/microsoft-drops-emotion-recognition-facial-analysis/
Tue, 05 Jul 2022

Despite facial recognition technology’s potential, it faces mounting ethical questions and issues of bias.

To address those concerns, Microsoft recently released its Responsible AI Standard and made a number of changes, the most noteworthy of which is to retire the company’s emotional recognition AI technology.

Responsible AI

Microsoft’s new policy contains a number of major announcements.

  • New customers must apply for access to use facial recognition operations in Azure Face API, Computer Vision and Video Indexer, and existing customers have one year to apply and be approved for continued access to the facial recognition services.
  • Microsoft’s policy of Limited Access adds use case and customer eligibility requirements to access the services.
  • Facial detection capabilities—including detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and facial bounding box—will remain generally available and do not require an application.

The centerpiece of the announcement is that the software giant “will retire facial analysis capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup.”

Microsoft noted that “the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics…opens up a wide range of ways they can be misused—including subjecting people to stereotyping, discrimination, or unfair denial of services.”
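For readers who want to see what the retained, generally available detection capabilities look like in practice, below is a minimal sketch assuming the azure-cognitiveservices-vision-face Python SDK. The endpoint, key, and image URL are placeholders, and the attribute list mirrors the announcement above rather than a tested configuration, so treat the details as indicative rather than authoritative.

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

# Placeholder values -- substitute your own Azure resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
KEY = "<your-key>"

client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Request only detection attributes that remain generally available
# (no emotion, gender, or age, and no face ID for identification).
faces = client.face.detect_with_url(
    url="https://example.com/photo.jpg",
    return_face_id=False,
    return_face_landmarks=True,
    return_face_attributes=["headPose", "blur", "exposure", "noise", "occlusion", "glasses"],
    detection_model="detection_01",
)

for face in faces:
    rect = face.face_rectangle
    pose = face.face_attributes.head_pose
    print(f"Face at ({rect.left}, {rect.top}), yaw={pose.yaw:.1f}, glasses={face.face_attributes.glasses}")
```

The identification and verification operations, by contrast, now sit behind the Limited Access application process described above.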

Also read: AI Suffers from Bias—But It Doesn’t Have To

Moving Away from Facial Analysis

There are a number of reasons why major IT players have been moving away from facial recognition technologies, including limiting law enforcement access to the technology.

Fairness concerns

Automated facial analysis and facial recognition software have always generated controversy, and the societal biases often inherent in AI systems only intensify the potential for harm. Many commercial facial analysis systems today inadvertently exhibit bias in categories such as race, age, culture, ethnicity, and gender. Microsoft's implementation of its Responsible AI Standard aims to help the company get ahead of potential issues of bias through its outlined Fairness Goals and Requirements.

Appropriate use controls

Despite Azure AI Custom Neural Voice's considerable potential in entertainment, accessibility, and education, it could also be misused to deceive listeners by impersonating speakers. Through its Responsible AI program and the Sensitive Uses review process that underpins the Responsible AI Standard, Microsoft reviewed its facial recognition and Custom Neural Voice technologies and developed a layered control framework. By limiting access to these technologies and implementing these controls, Microsoft hopes to safeguard both the technologies and their users from misuse while ensuring that legitimate implementations still deliver value.

Lack of consensus on emotions

Microsoft's decision to end public access to the emotion recognition and facial characteristic identification features of its AI stems from the lack of scientific consensus on the definition of emotions. Experts inside and outside the company have pointed out that, without that consensus, emotion recognition products generalize inferences across demographics, regions, and use cases. This hinders the technology's ability to provide appropriate solutions to the problems it aims to solve and ultimately undermines its trustworthiness.

The skepticism surrounding the technology stems from its disputed efficacy and the shaky justification for its use. Human rights groups contend that emotion AI is discriminatory and manipulative. One study found that emotion AI consistently rated White subjects as showing more positive emotions than Black subjects across two different facial recognition software platforms.

Intensifying privacy concerns

There is increasing scrutiny of facial recognition technologies and their unethical use for public surveillance and mass face detection without consent. Even though facial analysis may collect generic data that is kept anonymous—such as Azure Face's inference of attributes like gender, hair, and age—anonymization does not alleviate ever-growing privacy concerns. Even subjects who consent to such technologies often harbor concerns about how the collected data is stored, protected, and used.

Also read: What Does Explainable AI Mean for Your Business?

Facial Detection and Bias

Algorithmic bias occurs when machine learning models reflect the biases of their creators or their training data. The large-scale use of these models in our technology-dependent lives means their applications risk absorbing and proliferating those biases at scale.

Facial detection technologies struggle to produce accurate results for women, dark-skinned people, and older adults, as these systems are commonly trained on facial image datasets dominated by Caucasian subjects. Bias in facial analysis and facial recognition technologies yields real-life consequences, such as the following examples.

Inaccuracy

Regardless of the strides facial detection technologies have made, bias often yields inaccurate results. Studies show that face detection generally performs better on lighter skin tones. One study reported a maximum error rate of 0.8% when identifying lighter-skinned men, compared with up to 34.7% for darker-skinned women.
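Disparities like these are, at bottom, differences in per-group error rates. A simple way to check a system for this kind of skew is to score a labeled evaluation set and compute the error rate separately for each demographic group; the sketch below uses made-up records purely to show the bookkeeping, not real benchmark data.

```python
from collections import defaultdict

# Each record: (demographic_group, ground_truth_is_match, system_said_match).
# These rows are invented for illustration; a real audit would use a labeled test set.
results = [
    ("lighter-skinned male", True, True),
    ("lighter-skinned male", False, False),
    ("darker-skinned female", True, False),   # a false non-match
    ("darker-skinned female", False, True),   # a false match
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, predicted in results:
    totals[group] += 1
    if truth != predicted:
        errors[group] += 1

for group, total in totals.items():
    print(f"{group}: error rate {errors[group] / total:.1%} over {total} trials")
```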

The failures in recognizing the faces of dark-skinned people have led to instances where the technology has been used wrongly by law enforcement. In February 2019, a Black man was accused of not only shoplifting but also attempting to hit a police officer with a car even though he was forty miles away from the scene of the crime at the time. He spent 10 days in jail and his defense cost him $5,000.

Since the case was dismissed for lack of evidence in November 2019, the man is suing the authorities involved for false arrest, imprisonment and civil rights violation. In a similar case, another man was wrongfully arrested as a result of inaccuracy in facial recognition. Such inaccuracies raise concerns about how many wrongful arrests and convictions may have taken place.

Several vendors of the technology, such as IBM, Amazon, and Microsoft, are aware of such limitations in areas like law enforcement and the implication of the technology for racial injustice and have moved to prevent potential misuse of their software. Microsoft’s policy prohibits the use of its Azure Face by or for state police in the United States.

Decision making

It is not uncommon to find facial analysis technology being used to assist in the evaluation of video interviews with job candidates. These tools influence recruiters’ hiring decisions using data they generate by analyzing facial expressions, movements, choice of words, and vocal tone. Such use cases are meant to lower hiring costs and increase efficiency by expediting the screening and recruitment of new hires.

However, failing to train such algorithms on datasets that are both large and diverse enough introduces bias, which may deem certain people more suitable for employment than others. A false positive can lead to hiring an unsuitable candidate, while a false negative can mean rejecting the most suitable one. As long as the models contain bias, similar results are likely in any context where the technology is used to make decisions based on people's faces.

What’s Next for Facial Analysis?

None of this means Microsoft is discarding its facial analysis and recognition technology entirely, as the company recognizes that these features and capabilities can yield value in controlled accessibility contexts. Microsoft's biometric systems, such as facial recognition, will be limited to partners and customers of managed services. Existing users will retain access to facial analysis until June 30, 2023, via the Limited Access arrangement.

Limited Access applies only to users working directly with the Microsoft accounts team, and Microsoft has published a list of approved Limited Access use cases. Users have until that date to submit applications for approval to continue using the technology, and approved systems will be restricted to use cases deemed acceptable. Additionally, a code of conduct and guardrails will be used to ensure authorized users do not misuse the technology.

The celebrity recognition features in Computer Vision and Video Indexer are also subject to Limited Access, as is Video Indexer's face identification. Customers will no longer have general access to facial recognition from these two services, or from the Azure Face API.

As a result of its review, Microsoft announced, “We are undertaking responsible data collections to identify and mitigate disparities in the performance of the technology across demographic groups and assessing ways to present this information in a way that would be insightful and actionable for our customers.”

Read next: Best Machine Learning Software

Facial Recognition Crosses a Line with Mask Removal Features
https://www.itbusinessedge.com/business-intelligence/facial-recognition-mask-removal/
Tue, 19 Oct 2021

Clearview AI's facial recognition now includes mask removal and enhancing features, but does this cross a line?

In 2020, masks became a large part of Western culture, becoming the only way many people felt safe venturing out in public. Major clothing companies started offering them, and people coordinated their masks to match their outfits. 

However, face masks also present a problem to facial recognition software, blocking several facial characteristics that the software would otherwise use to make an ID. Clearview AI, a company that creates facial recognition software aimed at law enforcement agencies and boasts a photo database of over 10 billion images, says it has solved this problem. The company pulls photos from news media, mugshot websites, and social media profiles.

In reality, mask removal and enhancement features on facial recognition software cross a line, and businesses should think twice before using them. Because so many of Clearview AI’s customers are law enforcement agencies, it’s likely that these new features will be used to make arrests.


What Does Mask Removal Do?

Facial recognition uses artificial intelligence (AI) to analyze the geometry of a person’s face, including features like the distance between their eyes and the shape of their chin. The new mask removal tools would basically use other photos in the AI’s database to guess at what a person might look like under their mask. The model takes data points from the part of the face it can see (the eyes, forehead, and possibly the ears), and then attempts to match those to other images using statistical patterns to determine possible facial characteristics that the mask is hiding.
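One way to picture this, without claiming it is how Clearview's system actually works, is a matcher that computes signatures only from the unoccluded upper part of the face and treats everything below the mask line as unknown. In the sketch below, the upper_face_embedding() function and the threshold are hypothetical stand-ins, and random pixel arrays substitute for real photos.

```python
import numpy as np

def upper_face_embedding(face_img: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: crop the eyes/forehead region and reduce it to a feature vector.
    A real system would run the crop through a trained face-embedding model."""
    upper = face_img[: face_img.shape[0] // 2]      # keep roughly the top half of the face
    return upper.reshape(-1, upper.shape[-1]).mean(axis=0)

def match_masked_probe(probe_img: np.ndarray, gallery: dict, threshold: float = 0.7):
    """Match a masked probe against enrolled embeddings using only upper-face features."""
    probe = upper_face_embedding(probe_img)
    scores = {
        name: float(np.dot(probe, emb) / (np.linalg.norm(probe) * np.linalg.norm(emb) + 1e-9))
        for name, emb in gallery.items()
    }
    best = max(scores, key=scores.get)
    # With most of the face hidden, scores bunch together and confidence drops,
    # which is why a strict threshold -- and human review -- matter even more here.
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Hypothetical usage with random pixel data standing in for real photos:
gallery = {"enrolled_person": upper_face_embedding(np.random.rand(160, 160, 3))}
print(match_masked_probe(np.random.rand(160, 160, 3), gallery))
```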

These features could be helpful for emotion recognition and advertising, as long as organizations use them with permission. For example, medical staff could determine a patient’s level of pain or discomfort in a waiting room without being able to see their whole face, allowing them to determine which patients they need to see first. However, many of Clearview’s clients seem to be law enforcement agencies.

What’s the Problem with Using Facial Recognition AI to “Remove” Masks?

Considering how much of a person’s face a mask actually blocks, that means about two-thirds of a facial recognition match using these features would be strictly guesswork. We already know that facial recognition has some major issues with accuracy, especially when it comes to identifying women of color, so adding guesswork on top of that is just asking for trouble. 

“I would expect accuracy to be quite bad, and even beyond accuracy, without careful control over the data set and training process, I would expect a plethora of unintended bias to creep in,” said MIT professor Aleksander Madry in an interview with Wired. Facial recognition models already don’t get enough training with people of color, so the likelihood of the model accurately identifying a non-white person with a mask on is extremely low.

Carlos Anchia, CEO of Plainsight, explains how this technology would work. “Attempting to apply the technology to facial feature prediction is fraught with complexity and potential for inaccuracy,” he says. “In one approach to automating a prediction of features hidden by masks, the model would first remove the mask in the image and then create a void. This void would need to have that portion of the face replaced with predicted facial features resulting from the matching images. In cases like this, confidence in the predictive (altered) image would likely be low and would require an enormous amount of data for each image/person.”

Also read: The Struggles & Solutions to Bias in AI

The Dangers of Increasing Facial Recognition Use

One of the issues with increasing facial recognition use is that many users, especially those in law enforcement, don’t really seem to be addressing how inaccurate the technology is. Also, as we learned from the recent Facebook hearings, AI algorithms require human oversight for the best results, but understaffed organizations may not provide this, especially if it won’t help their bottom line.

“My intention with this technology is always to have it under human control. When AI gets it wrong it is checked by a person,” Clearview AI co-founder and CEO Hoan Ton-That told Wired. As great as that sounds, we know that organizations don’t always use technology exactly the way it was originally intended. After all, facial recognition isn’t the only problematic “science” law enforcement agencies use to catch criminals, so there’s no guarantee that they won’t use this incorrectly as well. 

Businesses Must Be Cautious About Using this Technology

While there’s an obvious demand for accurate facial recognition technology, businesses have to be careful about using it, especially in its current iteration. Anchia says, “With the new Clearview AI technology, the only data points that are common from image-to-image of individuals would be the exposed (unmasked) images. To perform at operational accuracy with a high degree of robustness, machine models often require additional data points to bolster the confidence in the predictions. In these cases, the large number of data points required to achieve high-accuracy prediction quality is not present.”

Facial recognition AI is, unfortunately, not accurate enough to make life-changing decisions. Instead, businesses can use it to improve their product lines or give employees passwordless access to devices. Using facial recognition in these ways helps avoid some of the bias issues that the technology brings with it, while still giving it a chance to improve its accuracy.

Read next: Edge AI: The Future of Artificial Intelligence and Edge Computing

NVIDIA Doesn't Yet Realize the Power of Its Digital Brain
https://www.itbusinessedge.com/it-management/nvidia-doesnt-yet-realize-the-power-of-its-digital-brain/
Mon, 22 Feb 2016


Every once in a while, a company comes up with something that is more amazing than the firm actually realizes. I think that was the case with the initial iPod. It was an almost hobby-like Apple product, but eventually the market, and Apple, caught on that it could be so much more. Not only was the iPod critical to the firm's turnaround, but it spawned the iPad and the iPhone (a product that arguably underpins Apple's entire valuation today).

I think that is also the case with the NVIDIA DRIVE PX 2 computer. Yes, it is being applied to self-driving cars initially. But it is a computer that can see, hear, evaluate, learn and respond instantly to a massive number of sensory inputs. It is really the first commercially available brain-like product.

This suggests that the applications that this very unique computer could be applied to are far broader than just self-driving cars. It could be applied to general robotics, large-scale smart buildings and even smart cities.

Let me explain.

DRIVE PX 2 Specs

The DRIVE PX 2 is a relatively small computer designed to deliver a combined 2.3 teraflops of processing power, blending 12 cameras, radar, lidar, and a variety of other sensors into a data stream that can be applied to a decision matrix. This matrix is the result of deep learning methods that allow the computer to recognize different objects and respond appropriately to them. These objects cover virtually all vehicles, signs, lane markers, people, animals, other forms of transportation, weather, lighting, and road anomalies (like potholes).

This product is built to use NVIDIA DIGITS, a heavily researched deep learning training system designed to make a computer capable of making decisions in real time without human interaction. DIGITS is developed on a system that can handle 7 teraflops of computation and forms the basis for training the DRIVE PX 2.

The result is a digital brain that can look 360 degrees at once across a variety of sensors and create a real-time emulation of the world around it, about which it can then make real-time decisions. And the 360-degree limit is only because that’s all a car really needs. In theory, it could also look over and under the car at the same time, as well as provide a global view of the vehicle.
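Conceptually, that digital brain is a perception-and-decision loop: fuse detections from every sensor into one world model, then map that model to an action in real time. The sketch below is a toy illustration of the loop under that assumption; the Detection fields, sensor inputs, and rules are invented for the example and are not NVIDIA's software.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "pothole", "stop_sign"
    bearing_deg: float  # direction relative to the vehicle's heading
    distance_m: float

def fuse(camera, radar, lidar):
    """Toy sensor fusion: merge detections from all sensors into one 360-degree world model."""
    return list(camera) + list(radar) + list(lidar)

def decide(world_model):
    """Toy decision matrix: map the fused world model to a driving action."""
    for det in world_model:
        if det.label == "pedestrian" and det.distance_m < 15:
            return "brake"
        if det.label == "pothole" and abs(det.bearing_deg) < 10:
            return "steer_around"
    return "continue"

# One iteration of the real-time loop, with made-up sensor readings:
world = fuse(
    camera=[Detection("pedestrian", bearing_deg=5.0, distance_m=12.0)],
    radar=[],
    lidar=[Detection("pothole", bearing_deg=2.0, distance_m=30.0)],
)
print(decide(world))  # -> "brake"
```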

So, basically, this is an all-seeing brain trained to be able to view and respond to anything the sensor can see around it, potentially in a full globe.

So what else could it be used for?

Military Defense

One of the most obvious uses would be military defense systems. The older Phalanx system in use for U.S. military ships has a small fraction of the processing power available to DRIVE PX 2 and can only defend against a comparatively small arc of potential attack vectors. This has more recently been enhanced with a SeaRAM-blended system of guns and missiles, providing a more comprehensive defense solution.

Updating these systems with another purpose-built computer would be prohibitively expensive, but that's what makes a general-purpose product like the DRIVE PX 2 attractive: It can be adapted to almost anything. One computer, or two if you wanted full redundancy, could cover a ship both above and below the waterline, with reaction times that a human couldn't match. It could even include taking emergency control of the helm to safely avoid the threat. Since DRIVE PX 2 is designed to network with other vehicles, it's already set up to coordinate with a battle group of ships, coordinating a response between a carrier and its destroyer escort far better and far more cheaply than deployed systems.

You’d end up with a result both better and far cheaper than is in use today, and it could be retrofitted relatively easily given its small size and teachable nature.

Smart Cities, Smart Buildings

The idea of integrating actual cameras into smart buildings and smart cities is hardly new, but doing it inexpensively has been very difficult. When you talk about being able to respond in real time to problems and threats having to do with people, facial recognition systems tend to be too slow and so unique that a human needs to be in the data chain to make decisions.

However, the DRIVE PX 2 is designed to handle visual information expertly and react to it. Think of traffic lights that could recognize police or fire vehicles; systems that would automatically block traffic or shut down power in the face of related problems; or security systems that could distinguish an authorized person from an employee who isn't supposed to be in an area or is doing something inappropriate, and that could more effectively track movement across a city or through a building. These are all capabilities this system could adapt to easily.

Integrating security with building or city management systems has normally been problematic. But a system like DRIVE PX 2 could be an ideal way of making this all real and would be far less expensive than the typical highly customized approach.

Wrapping Up: And Robots

The DRIVE PX 2 can do a lot more than just make cars autonomous. Given that an autonomous car is basically a rolling robot, it seems like the next step is for DRIVE PX 2 to be applied to construction equipment, public transportation, factory floors and, as noted, defense systems and integrated security smart city/building systems. For now, however, DRIVE PX 2 is only focused on cars, which is why I think NVIDIA hasn’t yet realized just how powerful a digital brain tied to deep learning could actually be.  

Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm.  With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+.
