Data privacy Archives | IT Business Edge

The Toll Facial Recognition Systems Might Take on Our Privacy and Humanity (July 22, 2022)
https://www.itbusinessedge.com/business-intelligence/facial-recognition-privacy-concerns/

Artificial intelligence really is everywhere in our day-to-day lives, and one area that’s drawn a lot of attention is its use in facial recognition systems (FRS). This controversial collection of technology is one of the most hotly-debated among data privacy activists, government officials, and proponents of tougher measures on crime.

Enough ink has been spilled on the topic to fill libraries, but this article is meant to distill some of the key arguments, viewpoints, and general information related to facial recognition systems and the impacts they can have on our privacy today.

What Are Facial Recognition Systems?

The actual technology behind FRS, and who develops it, can be complicated. It’s best to have a basic idea of how these systems work before diving into the ethical and privacy concerns surrounding their use.

How Do Facial Recognition Systems Work?

On a basic level, facial recognition systems operate on a three-step process. First, the hardware, such as a security camera or smartphone, records a photo or video of a person.

That photo or video is then fed into an AI program, which maps and analyzes the geometry of the person’s face, such as the distance between the eyes or the contours of the face. The AI also identifies specific facial landmarks, such as the forehead, eye sockets, eyes, and lips.

Finally, all these landmarks and measurements come together to create a digital signature, which the AI compares against its database of signatures to find a match or verify someone’s identity. That signature is then stored in the database for future reference.
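
To make the matching step concrete, here is a minimal sketch in Python of how a system might compare a probe signature against a database of enrolled signatures. It assumes the earlier capture and landmark-mapping stages have already reduced each face to a numeric embedding; the cosine-similarity scoring, the 128-dimensional embeddings, and the 0.6 threshold are illustrative assumptions rather than any particular vendor’s method.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face signatures (closer to 1.0 means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return the ID of the best-matching enrolled signature, or None if no match."""
    best_id, best_score = None, -1.0
    for person_id, signature in gallery.items():
        score = cosine_similarity(probe, signature)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None

# Hypothetical 128-dimensional signatures produced by the earlier mapping stage.
gallery = {"person_042": np.random.rand(128), "person_107": np.random.rand(128)}
probe = np.random.rand(128)
print(match_face(probe, gallery))  # an enrolled ID if similar enough, otherwise None
```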

Read More At: The Pros and Cons of Enlisting AI for Cybersecurity

Use Cases of Facial Recognition Systems

A technology like facial recognition is broadly applicable to a number of different industries. Two of the most obvious are law enforcement and security. 

With facial recognition software, law enforcement agencies can track suspects and offenders unfortunate enough to be caught on camera, while security firms can utilize it as part of their access control measures, checking people’s faces as easily as they check people’s ID cards or badges.

Access control in general is the most common use case for facial recognition so far. It generally relies on a smaller database (i.e. the people allowed inside a specific building), meaning the AI is less likely to hit a false positive or a similar error. Plus, it’s such a broad use case that almost any industry imaginable could find a reason to implement the technology.

Facial recognition is also a hot topic in the education field, especially in the U.S. where vendors pitch facial recognition surveillance systems as a potential solution to the school shootings that plague the country more than any other. It has additional uses in virtual classroom platforms as a way to track student activity and other metrics.

In healthcare, facial recognition can theoretically be combined with emergent tech like emotion recognition for improved patient insights, such as being able to detect pain or monitor their health status. It can also be used during the check-in process as a no-contact alternative to traditional check-in procedures.

The world of banking saw an increase in facial recognition adoption during the COVID-19 pandemic, as financial institutions looked for new ways to safely verify customers’ identities.

Some workplaces already use facial recognition as part of their clock-in-clock-out procedures. It’s also seen as a way to monitor employee productivity and activity, preventing folks from “sleeping on the job,” as it were. 

Companies like HireVue were developing software that used facial recognition to determine the hireability of prospective employees. However, the company discontinued the facial analysis portion of its software in 2021. In a statement, the firm cited public concerns over AI and the diminishing contribution of visual analysis to the software’s effectiveness.

Retailers and venues that sell age-restricted products, such as bars or grocery stores with liquor licenses, could use facial recognition to better prevent underage customers from buying these products.

Who Develops Facial Recognition Systems?

The people developing FRS are many of the same usual suspects who push other areas of tech research forward. As always, academics are some of the primary contributors to facial recognition innovation. The field began in academia in the 1960s with researchers like Woody Bledsoe.

In a modern day example, The Chinese University of Hong Kong created the GaussianFace algorithm in 2014, which its researchers reported had surpassed human levels of facial recognition. The algorithm scored 98.52% accuracy compared to the 97.53% accuracy of human performance.

In the corporate world, tech giants like Google, Facebook, Microsoft, IBM, and Amazon have been just some of the names leading the charge.

Google’s facial recognition is utilized in its Photos app, which infamously mislabeled a picture of software engineer Jacky Alciné and his friend, both of whom are black, as “gorillas” in 2015. To combat this, the company simply blocked “gorilla” and similar categories like “chimpanzee” and “monkey” on Photos.

Amazon was even selling its facial recognition system, Rekognition, to law enforcement agencies until 2020, when the company banned police use of the software. The ban is still in effect as of this writing.

Facebook used facial recognition technology on its social media platform for much of the platform’s lifespan. However, the company shuttered the software in late 2021 as “part of a company-wide move to limit the use of facial recognition in [its] products.”

Additionally, there are firms who specialize in facial recognition software like Kairos, Clearview AI, and Face First who are contributing their knowledge and expertise to the field.

Read More At: The Value of Emotion Recognition Technology

Is This a Problem?

To answer the question of whether we should be worried about facial recognition systems, it’s best to look at some of the arguments that proponents and opponents of facial recognition commonly use.

Why Use Facial Recognition?

The most common argument in favor of facial recognition software is that it provides more security for everyone involved. In enterprise use cases, employers can better manage access control, while lowering the chance of employees becoming victims of identity theft.

Law enforcement officials say the use of FRS can aid their investigative abilities, helping them catch perpetrators quickly and more accurately. It can also be used to track victims of human trafficking, as well as individuals who might not be able to communicate, such as people with dementia. This, in theory, could reduce the number of police-caused deaths in cases involving these individuals.

Human trafficking and sex-related crimes are an oft-repeated refrain from proponents of this technology in law enforcement. Vermont, the state with the strictest bans on facial recognition, peeled back its ban slightly to allow the technology’s use in investigating child sex crimes.

For banks, facial recognition could reduce the likelihood and frequency of fraud. With biometric data like what facial recognition requires, criminals can’t simply steal a password or a PIN and gain full access to your entire life savings. This would go a long way in stopping a crime for which the FTC received 2.8 million reports from consumers in 2021 alone.

Finally, some proponents say, the technology is so accurate now that the worries over false positives and negatives should barely be a concern. According to a 2022 report by the National Institute of Standards and Technology, top facial recognition algorithms can have a success rate of over 99%, depending on the circumstances.

With accuracy that good and use cases that strong, facial recognition might just be one of the fairest and most effective technologies we can use in education, business, and law enforcement, right? Not so fast, say the technology’s critics.

Why Ban Facial Recognition Technology?

First, the accuracy of these systems isn’t the primary concern for many critics of FRS. Whether the technology is accurate or not is, to them, beside the point.

While academia is where much facial recognition research is conducted, it is also where many of the concerns and criticisms are raised regarding the technology’s use in areas of life such as education and law enforcement.

Northeastern University Professor of Law and Computer Science Woodrow Hartzog is one of the most outspoken critics of the technology. In a 2018 article Hartzog said, “The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled.”

The concerns over privacy are numerous. As AI ethics researcher Rosalie A. Waelen put it in a 2022 piece for AI & Ethics, “[FRS] is expected to become omnipresent and able to infer a wide variety of information about a person.” The information it is meant to infer is not necessarily information an individual is willing to disclose.

Facial recognition technology has demonstrated difficulties identifying individuals of diverse races, ethnicities, genders, and age. This, when used by law enforcement, can potentially lead to false arrests, imprisonments, and other issues.

As a matter of fact, it already has. In Detroit, Robert Williams, a black man, was incorrectly identified by facial recognition software as a watch thief and falsely arrested in 2020. After being detained for 30 hours, he was released for insufficient evidence once it became clear that the photographed suspect and Williams were not the same person.

This wasn’t the only time this happened in Detroit either. Michael Oliver was wrongly picked by facial recognition software as the one who threw a teacher’s cell phone and broke it.

A similar case happened to Nijeer Parks in late 2019 in New Jersey. Parks was detained for 10 days for allegedly shoplifting candy and trying to hit police with a car. Facial recognition falsely identified him as the perpetrator, despite Parks being 30 miles away from the incident at the time. 

There is also, in critics’ minds, an inherently dehumanizing element to facial recognition software and the way it analyzes the individual. Recall the aforementioned incident wherein Google Photos mislabeled Jacky Alciné and his friend as “gorillas.” It didn’t even recognize them as human. Given Google’s response to the situation was to remove “gorilla” and similar categories, it arguably still doesn’t.

Finally, there comes the issue of what would happen if the technology was 100% accurate. The dehumanizing element doesn’t just go away if Photos can suddenly determine that a person of color is, in fact, a person of color. 

The way these machines see us is fundamentally different from the way we see each other, because the machines’ way of seeing goes only one way. As Andrea Brighenti said, facial recognition software “leads to a qualitatively different way of seeing … [the subject is] not even fully human. Inherent in the one-way gaze is a kind of dehumanization of the observed.”

In order to get an AI to recognize human faces, you have to teach it what a human is, which can, in some cases, cause it to take certain human characteristics outside of its dataset and define them as decidedly “inhuman.”

Moreover, making facial recognition technology more accurate at detecting people of color only really serves to make law enforcement and business-related surveillance better. This means that, as researchers Nikki Stevens and Os Keyes noted in their 2021 paper for the academic journal Cultural Studies, “efforts to increase representation are merely efforts to increase the ability of commercial entities to exploit, track and control people of colour.”

Final Thoughts

Ultimately, how much one worries about facial recognition technology comes down to a matter of trust. How much trust does a person place in the police or Amazon or any random individual who gets their hands on this software and the power it provides that they will only use it “for the right reasons”?

This technology provides institutions with power, and when thinking about giving power to an organization or an institution, one of the first things to consider is the potential for abuse of that power. For facial recognition, specifically for law enforcement, that potential is quite large.

In an interview for this article, Frederic Lederer, William & Mary Law School Chancellor Professor and Director of the Center for Legal & Court Technology, shared his perspective on the potential abuses facial recognition systems could facilitate in the U.S. legal system:

“Let’s imagine we run information through a facial recognition system, and it spits out 20 [possible suspects], and we had classified those possible individuals in probability terms. We know for a fact that the system is inaccurate and even under its best circumstances could still be dead wrong.

If what happens now is that the police use this as a mechanism for focusing on people and conducting proper investigation, I recognize the privacy objections, but it does seem to me to be a fairly reasonable use.

The problem is that police officers, law enforcement folks, are human beings. They are highly stressed and overworked human beings. And what little I know of reality in the field suggests that there is a large tendency to dump all but the one with the highest probability, and let’s go out and arrest him.”

Professor Lederer believes this is a dangerous idea, however:

“…since at minimum the way the system operates, it may be effectively impossible for the person to avoid what happens in the system until and unless… there is ultimately a conviction.”

Lederer explains that the Bill of Rights guarantees individuals the right to a “speedy trial.” As courts have interpreted that right, however, arrested individuals can spend at least a year in jail before the courts even begin to weigh whether their trial has been speedy.

Add to that plea bargaining:

“…Now, and I don’t have the numbers, it is not uncommon for an individual in jail pending trial to be offered the following deal: ‘plead guilty, and we’ll see you’re sentenced to the time you’ve already been [in jail] in pre-trial, and you can walk home tomorrow.’ It takes an awful lot of guts for an individual to say ‘No, I’m innocent, and I’m going to stay here as long as is necessary.’

So if, in fact, we arrest the wrong person, unless there is painfully obvious evidence that the person is not the right person, we are quite likely to have individuals who are going to serve long periods of time pending trial, and a fair number of them may well plead guilty just to get out of the process.

So when you start thinking about facial recognition error, you can’t look at it in isolation. You have to ask: ‘How will real people deal with this information and to what extent does this correlate with everything else that happens?’ And at that point, there’s some really good concerns.”

As Lederer pointed out, these abuses already happen in the system, but facial recognition systems could exacerbate these abuses and even increase them. They can perpetuate pre-existing biases and systemic failings, and even if their potential benefits are enticing, the potential harm is too present and real to ignore.

Of the viable use cases of facial recognition that have been explored, the closest thing to a “safe” use case is ID verification. However, there are plenty of equally effective ID verification methods, some of which use biometrics like fingerprints.

In reality, there might not be any “safe” use case for facial recognition technology. Any advancements in the field will inevitably aid surveillance and control functions that have been core to the technology from its very beginning.

For now, Lederer said he hasn’t come to any firm conclusions as to whether the technology should be banned. But he and privacy advocates like Hartzog will continue to watch how it’s used.

Read Next: What’s Next for Ethical AI?

Healthcare Cybersecurity: The Challenges of Protecting Patient Data (June 3, 2022)
https://www.itbusinessedge.com/security/healthcare-cybersecurity-protecting-patient-data/

Digital technology has dramatically transformed the healthcare industry, and in some ways this transformation is the stuff of sci-fi. Look at the Human Genome Project. This project successfully mapped out human DNA a decade ago. Today, individuals can conduct affordable genetic testing at home.

Similarly, it wasn’t too long ago that health records were kept on physical shelves in thick folders. But today they’re in the form of Electronic Health Records (EHRs), and patients can easily access them via online platforms or Internet of Things (IoT) devices.

While this easy accessibility and abundance of data benefits patients, it’s even more useful for cybercriminals. It has been recently reported that nearly 90% of healthcare institutions faced a data breach in the past two years. According to Statista, the average cost of a healthcare data breach is over $9 million.

Also read: Top Cybersecurity Companies & Service Providers

Why is Healthcare the No. 1 Target of Cyber Criminals?

Today, healthcare information is even more valuable than financial data. Therefore, the exposure of an individual’s healthcare data is a critical privacy risk and has far-reaching personal consequences.

In the event of a healthcare data breach, a patient might experience embarrassment over exposed health conditions or personal issues, and the breached data might be used for illegal activities such as blackmail, identity theft, and fraud.

Unfortunately, because of a number of cybersecurity weaknesses, breaching healthcare data can be a relatively simple job for hackers.

6 Cybersecurity Challenges of the Healthcare Industry

As new technology and compliance regulations arrive on the scene, every industry faces new cybersecurity threats to personal data. Unfortunately for healthcare, there are many reasons why it has become the No. 1 target of cybercriminals. Here we look at six significant healthcare cybersecurity challenges and their solutions in today’s digital age.

Phishing

Recent research shows that phishing is the most common cybercrime in the healthcare industry. In a typical phishing attack, users are tricked into disclosing passwords or other sensitive personal information, most often over email. For example, a hacker sends an email to a healthcare employee stating that their password is no longer valid, with a link to reset it. If the employee lacks awareness of phishing or proper training, they may follow the link and reset their password, and that is all a hacker needs to put a healthcare institution at risk.

Also read: Best Cybersecurity Training & Courses for Employees

The IoT challenge

The healthcare industry has quickly adopted IoT devices and produced massive IoT innovations over the past decade. Unfortunately, cybersecurity has not kept pace with IoT innovation and adoption. While IoT has brought clear benefits to healthcare, it has also introduced a growing number of security issues.

Hackers take advantage of IoT providers’ rush to roll out devices without considering the cybersecurity implications. With so many IoT devices circulating in the market and inside health organizations, hackers can easily exploit their vulnerabilities.

Also read: Best IoT Device Management Platforms & Software

Distributed denial-of-service

Hackers use distributed denial-of-service (DDoS) attacks to flood an organization’s network with internet traffic to the point where the business ceases to operate normally. DDoS attacks are often carried out alongside malware or ransomware attacks (discussed below). In sophisticated DDoS attacks, hackers flood a network with massive volumes of traffic from millions of compromised computers.

DDoS attacks are therefore hazardous to healthcare providers, who need reliable, fast network access to deliver efficient patient care, including email communication, filling prescriptions, and accessing and retrieving health records.

See also: 5 Best Practices for Mitigating DDoS Attacks

Ransomware attacks

A ransomware attack is a type of malware attack in which a cybercriminal infects systems, devices, and files in order to extort a ransom from the victim. Most ransomware attacks arrive as requests to click a malicious link, malicious ads (malvertising), or phishing emails.

Ransomware slows or halts business operations until a ransom is paid to the attacker. Untrained employees may fall into these traps, costing a health organization time and money it could otherwise have invested in new technology or better patient care.

Also read: How to Prevent & Respond to Ransomware

Data breaches

Protected Health Information (PHI) contains personal data, including Social Security numbers, contact information, test results, diagnoses, and prescriptions, and there is an active black market for it.

Hackers are interested in PHI because an individual’s health and diagnosis history cannot simply be deleted or replaced the way a credit card number can. Once hackers obtain this information, they can use it to get loans, medication, or insurance claims, or to set up credit lines, all under fake identities.

The Health Insurance Portability and Accountability Act (HIPAA) states that healthcare organizations must practice adequate data security measures in collecting and distributing PHI. But in reality, most organizations fail to update protocols, implement security measures, and adequately staff their IT departments.

Unauthorized disclosure

The unauthorized access or disclosure of PHI is just as dangerous and damaging as a ransomware attack. PHI exposure results from both intentional misconduct and accidental negligence by providers and employees.

The South Florida Community Care Network’s case is a real-world example of unauthorized disclosure. In September 2021, the organization announced that a former employee had been emailing internal documents containing PHI to their personal email inbox for several months.

While some of these instances arise from malicious intent, in most cases, these incidents stem from negligence or a lack of proper cybersecurity measures.

Tackling Healthcare Cybersecurity Challenges

Knowledge is power in the Information Age, and it plays a significant role in tackling cybersecurity challenges. Let’s look at some of the ways a healthcare organization can improve its cybersecurity efforts to ensure proper management and protection of sensitive data.

Create a cybersecurity culture

It pays to build a cybersecurity culture into the structure of a health organization. Activities to create this culture include ongoing cybersecurity training and educational programs for every employee that emphasize their role in protecting PHI.

The protection of devices

As healthcare organizations undergo digital transformation and become more tech-savvy, their dependence on smartphones, tablets, and other IoT devices has risen. These organizations must therefore adopt cybersecurity measures such as data encryption to keep the information on those devices secure.
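
As a hedged illustration of device-level data protection, the sketch below uses the widely available cryptography package’s Fernet interface to encrypt a record before it is stored. The sample record is hypothetical, and in practice the key would be held in a dedicated key-management system rather than generated inline.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in production, retrieve this from a key-management system
cipher = Fernet(key)

record = b'{"patient_id": "12345", "note": "hypothetical PHI record"}'
token = cipher.encrypt(record)    # ciphertext that is safe to store on the device
assert cipher.decrypt(token) == record   # only holders of the key can recover the record
```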

Install antivirus application

Antivirus software enhances network and data security, but it must be kept up to date. Regular updates are essential to protect a health organization against ever-evolving cyber threats.

A zero-trust policy is the best policy

A health organization shouldn’t make PHI readily available to everyone on its network. Instead, it should control access to PHI under a zero-trust policy, which grants access only to those who need to view and use it as part of their daily work.
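
As a rough sketch of what such a least-privilege check could look like in code, the example below assumes a hypothetical policy table mapping roles to the PHI fields and working hours their duties require. Real zero-trust deployments layer device posture, network context, and continuous verification on top of checks like this.

```python
from datetime import datetime, time

# Hypothetical policy: role -> (PHI fields the role needs, permitted working hours)
POLICY = {
    "nurse":   ({"vitals", "medications"}, (time(7, 0), time(19, 0))),
    "billing": ({"insurance_id"},          (time(9, 0), time(17, 0))),
}

def may_access_phi(role: str, field: str, when: datetime) -> bool:
    """Deny by default; allow only the fields and hours a role's duties require."""
    if role not in POLICY:
        return False
    allowed_fields, (start, end) = POLICY[role]
    return field in allowed_fields and start <= when.time() <= end

print(may_access_phi("billing", "medications", datetime.now()))  # False: outside the role's scope
```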

See the Top Zero Trust Security Solutions & Software

Maintain strong passwords

This may sound simple, but creating and regularly updating strong passwords plays a vital role in an organization’s cybersecurity. A typical strong password is 12 to 14 characters long and combines numbers, symbols, and uppercase and lowercase letters. Beyond that, employees must understand why strong passwords matter and the difference between strong and weak ones.
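
For illustration, here is a short Python sketch that generates a password meeting the guidance above (roughly 12 to 14 characters mixing cases, digits, and symbols) using the standard library’s secrets module. The length and character policy follow this article’s recommendation rather than any formal standard.

```python
import secrets
import string

def strong_password(length: int = 14) -> str:
    """Generate a random password containing upper, lower, digit, and symbol characters."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

print(strong_password())  # e.g. 'q7#Vd!p2Rz@L9m'
```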

Strong Cybersecurity in Healthcare Demands Expertise

Much as a health organization helps the human body build strong immunity, third-party healthcare cybersecurity solutions can strengthen a health organization’s defenses in various ways. An organization can implement its own cybersecurity measures, but maintaining strong security without additional outside support is difficult in a constantly evolving threat landscape.

An external healthcare security solution also improves an organization’s cyber health by continuously monitoring third-party vendors and IoT platforms, safeguarding PHI, and helping the organization stay in compliance with the healthcare industry’s evolving regulatory standards.

See the Best Managed Security Service Providers (MSSPs)

Bank-grade Security: Is it the Ultimate Cybersecurity Solution? (December 2, 2021)
https://www.itbusinessedge.com/security/bank-grade-security-is-it-the-ultimate-cybersecurity-solution/

In today’s age of cybercrime, it is not a question of whether your organization will be targeted but when. Attacks are becoming more common, sophisticated, and dangerous every day. For example, Trend Micro reported that the banking industry experienced a 1,318% increase in ransomware attacks in 2021. In addition, the cost of a data breach also continues to rise every year. According to IBM’s Cost of a Data Breach Report 2021, the average cost of a single data breach increased from USD 3.86 million to USD 4.24 million, the highest in 17 years.

Therefore, it is no surprise that all types of organizations now invest significantly in bank-grade security. They employ security experts, implement anti-fraud programs, and encrypt data to boost their cyber security.

But, what does bank-grade security actually mean? Is it really robust and reliable enough to beat all cybercrime and cyberattacks in this day and age, or is it just hot air?

Understanding Bank-grade Security

Bank-grade security is a term used to describe technologies that meet or exceed specific cybersecurity requirements set by banks worldwide. To put it simply, it is adhering to the same security standards as your bank.

These requirements are designed to protect customer data from being compromised even if there is a breach within the organization’s network infrastructure or systems. 

Bank-grade security is concerned with current data security standards in the industry. To be compliant and interoperable, for example, certain industries must follow security procedures codified in various laws and subsidiary legislation. The best example is the Federal Deposit Insurance Corporation (FDIC) Laws, Regulations, and Related Acts that regulate the U.S. banking industry.

Another essential requirement is user data protection: organizations that use bank-grade security comply with common global privacy laws and regulations.

Achieving Bank-grade Security

There are several interpretations of what “bank-grade security” means, but it usually entails:

  • Encrypting network traffic by using protocols like Transport Layer Security (TLS)
  • Utilizing strong customer authentication (SCA)
  • Other technical, administrative, and physical safeguards that depend on the particular industry

End-to-end data protection encrypts all traffic between servers to prevent interlopers from snooping on user information. When users sign up for online services, they typically need a bank card number plus an email address or username and password combination to access their account from mobile devices or desktop computers, and a high level of identity verification is required.
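
As a minimal sketch of the TLS requirement listed above, the snippet below opens a certificate-verified TLS connection using only Python’s standard library and issues a bare HTTPS request. The host is a placeholder, and production services would normally use an HTTPS client or framework rather than raw sockets.

```python
import socket
import ssl

def fetch_over_tls(host: str, path: str = "/") -> bytes:
    """Issue a simple HTTPS request over a certificate-verified TLS connection."""
    context = ssl.create_default_context()     # validates the server's certificate chain
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
            tls.sendall(request.encode())
            chunks = []
            while True:
                data = tls.recv(4096)
                if not data:
                    break
                chunks.append(data)
    return b"".join(chunks)

# print(fetch_over_tls("example.com")[:80])    # placeholder host
```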

There are also emerging standards, such as the Financial-grade API (FAPI) standard, which appears to be gaining ground and is built on strong user authentication principles. FAPI aims to let financial institutions and their partners communicate securely over APIs.

Also read: Top Zero Trust Security Solutions & Software 2021

Is Bank-grade Security the Best Solution?

The bank-grade security concept has been around for some time now. However, despite all bank-grade security solutions being developed over the last ten years, cybersecurity breaches are still rising worldwide. For example, according to the Timeline of Cyber Incidents Involving Financial Institutions by the Carnegie Endowment for International Peace, there were 11 major cyber security incidents involving banks and financial institutions (including FinTechs) between January and November 2021 in North America. The methods employed included Man-in-the-Middle (MitM) attacks, phishing, credential stuffing, token skimming, and social engineering.

So why are companies spending more money on bank-grade security? Why do they think it will make them more secure when recent events show otherwise? Unfortunately, claiming to have bank-grade security is insufficient, and many organizations use this term as part of marketing to ease their customers’ concerns.

Security specialists, IT managers, and CTOs should not feel secure simply because the firms that handle their critical data say they use bank-grade security. Cloud providers, SaaS companies, and other IT service providers must clarify what bank-grade security measures they use, prove it, and earn consumers’ trust.

Furthermore, now that most people use mobile phones to access internet services, IT service companies must go above and beyond by employing mobile app authentication and certificate pinning.

The most common implementation of mobile app authentication is a two-factor authentication method. One approach uses one-time passwords, where the user’s device, or an external security key or smart card, holds a shared secret that generates a code that changes every 30 seconds. A second approach uses the phone itself as an additional bank-grade security layer, requiring the user to verify their identity through an extra step when they log in to their service on their phone.
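
As a sketch of the one-time-password variant, the code below implements the common time-based OTP construction (RFC 6238) in plain Python: a shared secret held by the user’s device, security key, or smart card produces a six-digit code that changes every 30 seconds, which the server recomputes and compares at login. The secret shown is a hypothetical example value, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # 30-second time step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

shared_secret = "JBSWY3DPEHPK3PXP"                      # hypothetical base32-encoded secret
print(totp(shared_secret))                              # both sides compute the same code
```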

Certificate pinning protects against unauthorized access and interception by accepting only connections that present the expected digital certificate.
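
And here is a hedged sketch of one common form of certificate pinning, in which a client compares the SHA-256 fingerprint of the certificate a server presents against a value shipped with the app and refuses to proceed on a mismatch. The pinned fingerprint below is a placeholder, and real apps often pin the public key instead so routine certificate renewals do not break the client.

```python
import hashlib
import socket
import ssl

PINNED_SHA256 = "d4c1e5..."   # placeholder: fingerprint of the certificate the client expects

def connection_is_pinned(host: str, port: int = 443) -> bool:
    """Connect over TLS and check the server's certificate against the pinned hash."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)   # raw DER-encoded certificate
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    return fingerprint == PINNED_SHA256                    # reject the connection if False
```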

It’s critical not just from a security standpoint but also as a matter of trust-building between IT service providers—and other organizations working in sectors where privacy is an issue—and customers/users who use these apps daily.

How Can You Tell If an IT Service Provider Uses Bank-grade Security?

When thinking about bank-grade security, users should ask IT service providers questions around three specific areas:

Transparency

Transparency tells you a lot about an organization. How open is an IT service provider with potential clients about how your data, and your clients’ data, will be handled? The policies and principles of data governance and trust should be clearly stated, including the purpose and goals of data processing, the kind of data being processed, and how it is stored and safeguarded.

A lack of transparency in this area is an immediate red flag. If bank-grade security concepts are being used, transparency should be bank-grade, too. Does the organization have a public policy regarding third-party audits or assessments? Failure to conduct regular internal audits increases the risk of a breach.

Data privacy

Evaluate your service provider on common data privacy principles, such as lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity, and confidentiality. It is also essential to consider opt-out options, the right to be forgotten, and notification requirements in the event of a breach.

Compliance

In addition, evaluate service providers on how they meet regulatory requirements for bank-grade security. The service provider should hold certifications against cybersecurity frameworks such as ISO 27001, 27017, 27018, and 27701; PCI DSS; CSA STAR; WebTrust; SysTrust; NIST (National Institute of Standards and Technology); COBIT (Control Objectives for Information and Related Technologies); or other industry-specific best-practice standards. They must also comply with data privacy laws in your jurisdiction.

Any concerns in any of the above areas should be a red flag and a sign that bank-grade security isn’t being prioritized.

Read next: Potential Use Cases of Blockchain Technology for Cybersecurity

Data Privacy Forces a Tradeoff with Cybersecurity. Is It Worth the Risk? (November 30, 2021)
https://www.itbusinessedge.com/business-intelligence/data-privacy-cybersecurity-tradeoff/

Consumers know that companies track their internet interactions, and they can often identify when it’s happening. According to Pew Research, 83 percent of consumers frequently or occasionally saw ads that appeared to be targeted based on their browsing history. While some customers are demanding better data privacy protections, organizations are finding out the hard way that increased data privacy forces a tradeoff with cybersecurity.

Why is Data Privacy Such a Big Deal?

Data privacy keeps personal information from falling into the hands of criminals that might use it to steal someone’s identity. Historically, it referred to information like names, addresses, and credit card numbers, but now, businesses are also storing users’ browsing histories and purchase information to improve their marketing campaigns. 

Businesses want to protect this data because it gives them a competitive advantage over other vendors in their industry, and businesses that don’t take data privacy seriously may quickly alienate their customer base. The problem is that many people feel some of the tracking companies do is invasive, and they don’t get full visibility into what those organizations use their data for.

Bryan Oliver, Senior Analyst at Flashpoint, says, “There is nothing wrong with browser fingerprinting or cookies, but the fact that third-party advertisers also use them to build a unique profile of your device and track your browsing activity for use in advertising can be a privacy concern.” This data, too, can fall into the wrong hands and cause problems for users. 

Oliver explains, “Now, because fingerprinting has become more common, threat actors are realizing that a username and password are no longer enough to compromise an account; therefore, malware has begun to steal all sorts of information about a victim in order to construct a fingerprint. Threat actors can use this data to emulate a victim’s device, mimicking its operating system, installed software, and other information to trick fingerprint detection systems.”

Also Read: Eight Best Practices for Securing Long-Term Remote Work

How Does Increased Data Privacy Affect Cybersecurity?

For some cybersecurity measures, increased data privacy procedures can actually make security more difficult. Device fingerprinting, for example, allows an organization to match user credentials with devices and locations, so a login from an unfamiliar device would trigger an alert. Sam Crowther, CEO of Kasada, explains, “The more data you can collect about someone and the more cookies you can put on a device, the better you can fingerprint, the better you can watch behavior, and the more data points you have to make a decision.” 

Crowther goes on to say, “It becomes problematic from that standpoint when you remove it because now there are two very, very valuable data points that are usually quite reliable to make decisions on that are gone. So legitimate customers look the same as an illegitimate hacker when they come in, in a browser that you can’t use either. The result of that is usually organizations trying to force other ways to identify like two-factor authentication or stronger passwords, which usually has some sort of negative impact on user experience.”

When the user experience is bad, employees and customers look for workarounds to security measures, which can create new vulnerabilities.
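
To make the device-fingerprinting approach Crowther describes more concrete, here is a minimal, hypothetical sketch of the server-side check: a handful of client attributes are hashed into a stable identifier, and a login from a fingerprint the account has never used triggers step-up authentication or an alert. The attributes and stored values are placeholders; real systems draw on far richer signals.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Hash a stable set of client attributes into a device identifier."""
    canonical = json.dumps(attributes, sort_keys=True)        # stable ordering before hashing
    return hashlib.sha256(canonical.encode()).hexdigest()

known_fingerprints = set()   # placeholder: fingerprints previously seen for this account

incoming = device_fingerprint({
    "user_agent": "Mozilla/5.0 (placeholder)",
    "screen": "1920x1080",
    "timezone": "America/New_York",
    "language": "en-US",
})
if incoming not in known_fingerprints:
    print("Unfamiliar device: require step-up authentication or raise an alert")
```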

How Can Companies Protect Users Without Tracking Data?

Some cookies are necessary to verify devices, but there’s a lot of data tracking that should be optional. Oliver says, “Cookies are generally needed for authentication, and fingerprinting is needed for anti-fraud, but neither poses much of a privacy concern if it’s only used for identification and authorization. By informing users about the ways in which cookies and fingerprint technologies might be used for advertising and giving them the option to opt out of these uses, companies can keep a positive user experience and protect data privacy while still mitigating fraud.”

Crowther says bot mitigation tools help ensure traffic is legitimate. “Bots take advantage of this exact feature, where it is more and more common that people don’t have device fingerprints because it makes them look more legitimate,” he says. “A bot is typically just going to be a new browser that’s never been used before. It has no cookies, and now that we’re in an environment where that’s more common, it definitely makes it much harder to distinguish.”

Should Businesses Prioritize Data Privacy?

Businesses should always prioritize data privacy, but they don’t have to sacrifice cybersecurity to do so. Organizations that collect data in order to improve their advertising need to give their customers opt-out options and only gather the information that will help them identify illegitimate traffic. They should also rely on other security measures, like multi-factor authentication and SSL certificates to keep their customers’ personally identifiable information secure. Additionally, a bot mitigation program can help differentiate between legitimate and illegitimate traffic to further protect users.

Read Next: Data Security: Tokenization vs. Encryption

New Focus on Data Privacy Directs Compliance Trends in 2018 (December 17, 2018)
https://www.itbusinessedge.com/it-management/new-focus-on-data-privacy-directs-compliance-trends-in-2018/

Data privacy took center stage in 2018. Compliance regulations surrounding protecting confidential data aren’t new – laws like HIPAA have been around for years – but GDPR got everybody’s attention. Data privacy compliance has taken a prominent role in the security protocols across every industry, changing the way we think about privacy, who has to be responsible for privacy, and the trends that are emerging because of those new attitudes.

In the past, we associated data privacy compliance primarily with certain kinds of data or industries, such as medical records, financial services, and education. But now, primarily because of GDPR, everybody is thinking about privacy.

For example, today’s restaurants are juggling dozens of complex federal and state labor compliance issues, according to David Cantu, co-founder and chief customer officer of HotSchedules. While guest services, food quality management, menu development, efficient ordering, and marketing planning are essential tasks for successful restaurants, now labor compliance needs to be a priority.

“New York City’s Fair Work Week Law (effective November 2017), requires quick-service restaurants to determine work schedules two weeks in advance with various fines being imposed when shifts are changed thereafter,” Cantu explained. “One trend we’re currently seeing is expensive penalty costs as businesses adapt to last-minute staff changes causing franchise groups in New York City to challenge the law. Given the last-minute changes and penalty fees, businesses need a seamless solution to manage compliance and save valuable time as compliance law evolves.”

Legislation Introduced Everywhere

First and foremost among those trends is the attempt to initiate an American GDPR. (And it isn’t just Americans who want data privacy protections; countries from Canada to Australia have introduced some sort of privacy legislation.) All 50 states have legislation at least introduced, if not passed, addressing privacy, data protection, or breach notification, and in some cases all three. There’s a push to get legislation passed at the federal level. Whereas data protection from a cybersecurity perspective has languished in the halls of Congress and state capitols for years, protecting consumers’ personal information seems to be a high priority.

Consumer Privacy Comes First

Protecting PII is another compliance trend we’ve seen in 2018. “The focus of compliance has shifted from protecting the organization and its investors to protecting individuals,” said Zack Shulman, compliance research senior engineer with LogRhythm. “As organizations are becoming more and more data-driven, this is at the forefront of most GRC programs – or should be if it isn’t.”

Also changed is the vendor relationship. “Vendors are no longer separate entities from the organizations that contract them,” explained Shulman. “A breach to a vendor will most likely result in as much – or more – ill will to the parent organization as if they themselves had been breached, and GRC programs and common frameworks are taking this into account. As a result, parent organizations are building out more robust vendor management practices.”

Speaking of parent organizations, this shift in protection has led to individual organizations improving their privacy policies well beyond anything written down in United States law, added Josh Mayfield, director of security strategy at Absolute. “Private companies, in the sense of non-government, scrapped their old standards and rules for a more robust and user-benevolent way,” he continued. “In 2018, while Washington was busying itself with reelections, scandals and gaffes, organizations in the private sector were sprinting past legislators who are still behind on a federal standard for privacy.”

Rethinking Compliance’s Role

Organizations’ executives are finally realizing that compliance is not a part-time job, according to Shulman, and there must be a significant investment in compliance to satisfy requirements and gain value from it. “We started to see position requirements and even title designations built into legislation and regulations.”

A recent survey from Hochschule fuer Technik und Wissenschaft Berlin, University of Applied Sciences, and SAP found that today, compliance managers need to reshape their expertise in order to meet these new data privacy regulations.

“To adhere to this, we’ve seen compliance managers moving towards intelligent technologies, using artificial intelligence to identify potential patterns of fraud and to manage regulatory/trade compliance to reduce the risk of penalties and fines,” said Henner Schliebs, global vice president ERP and Finance Solutions with SAP.

The side effect of keeping pace, Schliebs added, is that the technology is now cannibalizing the role itself, which is threatened with being overtaken by AI. “So far, this has shifted the skill set of compliance managers, which now requires new data-related skills, the ability to work in networked structure, competencies for using the financial expertise in a new context, such as cyber risk, and ethical thinking and behavior.”

Increasing the Investment in Security

This new focus on compliance has changed the way organizations need to consider cybersecurity. Compliance strengthens organizations’ security postures by requiring minimum standards to meet regulations surrounding privacy. And this will lead to improved investment in security and training.

“As compliance teams strive to set relevant cyber-security policies that don’t slow down the business, there is a trend toward deeper and more technical training,” said Altaz Valani, research director at Security Compass. “This goes beyond traditional application security awareness training and gets into the nuts and bolts of how SQL injection works, for example.”

This refocus on security in terms of compliance could be a good thing for companies that struggle to get leadership to buy into cybersecurity, added George Wrenn, CEO and founder of CyberSaint Security. “With technology innovation moving at such a breakneck speed, the adoption of inherently insecure technologies has already caused headaches for many enterprises. Security is quickly becoming a differentiator in the market for new technologies.”

With the number of high-profile data breaches affecting all types of consumer personal information, the need for improved privacy compliances is necessary. In 2018, we got our first look at the way compliance is trending.

Sue Marquette Poremba has been writing about network security since 2008. In addition to her coverage of security issues for IT Business Edge, her security articles have been published at various sites such as Forbes, Midsize Insider and Tom’s Guide. You can reach Sue via Twitter: @sueporemba

Bringing Cybersecurity and Privacy Together (November 2, 2018)
https://www.itbusinessedge.com/it-management/bringing-cybersecurity-and-privacy-together/

While attending MPower 2018 a few weeks ago, I had the chance to sit down with Tom Gann, chief public policy officer and head of government relations with McAfee. Gann and I had begun an interesting conversation about privacy laws during a pre-conference reception, and I was thrilled to have a chance to continue the conversation. Data privacy issues have become a pet project of mine over the past year, ever since I began to learn and write more about GDPR, and here was my chance to talk to someone who sees data privacy regulations from both the security side and from the government relations side.

One thing I wanted to talk about was the intersection of privacy and security. Gann told me there is a misconception that privacy and security are in conflict with each other, and that’s not true. Privacy purists often think that cybersecurity tools track a lot of personal data and invade privacy.

That said, he continued, what we’re seeing is an evolution of the privacy community, driven by significant data breaches. Lots of PII has made its way into the Dark Web, thanks to some of the huge data breaches of the past few years, so much so that the price of Social Security numbers has dropped considerably. He added:

I think what the privacy community is seeing is that unless organizations are obligated to implement security, the fight to protect privacy won’t be won. That’s been a shift over the last five or so years.

What that means for the future is a much better relationship between the privacy community and the security industry, and this will spill over into rules and regulations. What we should strive for is a balanced outcome of federal laws that are designed to level the privacy playing field when it comes to consent but at the same time obligating organizations of all sizes to take privacy seriously.

GDPR is a good start, said Gann, because it provides a good roadmap on how to think from a privacy point of view. The NIST cybersecurity framework is designed to engineer in the steps needed to improve privacy. Those are policies that are building a foundation. It’s then up to the organization to implement security that can track data as it comes in and how it is used, and then build a security tool that meets both security and privacy obligations. Gann stated:

We think ultimately the shift will continue and there is an important evolution going on whereby security and privacy advocates can come together better.

Sue Marquette Poremba has been writing about network security since 2008. In addition to her coverage of security issues for IT Business Edge, her security articles have been published at various sites such as Forbes, Midsize Insider and Tom’s Guide. You can reach Sue via Twitter: @sueporemba

Why Aren’t the Data Privacy Laws We Have Now Enough? (September 28, 2018)
https://www.itbusinessedge.com/security/why-arent-the-data-privacy-laws-we-have-now-enough/

I’ve written a lot over the past months about why we need a federal law that is the equivalent of GDPR. States are stepping up on data privacy, but we may need something more. Most security folks I talked to agreed with me: we need a federal law.

But I said most, not all. Gabriel Gumbs, vice president of Product Strategy at STEALTHbits Technologies, takes a contrarian view on the need for a federal law. In his view, we don’t need one because a lot of privacy protections are already in place, even if those regulations are patchworked and not all-encompassing.

He’s right. A number of current federal privacy laws address very specific areas of concern or groups of people. For example, COPPA exists to protect the privacy of children online, FERPA protects the privacy of students, HIPAA protects the privacy of patient data, and the FTC enforces consumer data privacy, to name a few.

Do you see a trend there with these privacy laws already on the books? They are all overseen by a specific agency within the federal government, not by a single government authority. That’s because we don’t have a data privacy authority, whereas the EU countries all do. Without one, it would be very difficult to enforce a federal privacy law. As William Kovacic, a former general counsel, member and chair of the FTC during the Barack Obama and George W. Bush administrations, told the Washington Post on data privacy lawmaking:

In many ways we have an antiquated policymaking infrastructure. It’s a patchwork of controls that have no unifying principles and no unifying institutions to coordinate policy.

Another issue is that we might be generating too much data. Think of all the IoT devices out there and the amount of data produced. Who is responsible for all that information? A discussion from a Brookings article asks a valid question around that point. Our fitness trackers and smart watches, which it used as an example, hold a lot of personal and medical data, the kind of information that is in part covered by HIPAA and would be covered, theoretically, under a data privacy law, depending on which company held the data. But, the article continued, no matter who holds it, it is still all the same information:

It makes little sense that protection of data should depend entirely on who happens to hold it. This arbitrariness will spread as more and more connected devices are embedded in everything from clothing to cars to home appliances to street furniture. Add to that striking changes in patterns of business integration and innovation — traditional telephone providers like Verizon and AT&T are entering entertainment, while startups launch into the provinces of financial institutions like currency trading and credit and all kinds of enterprises compete for space in the autonomous vehicle ecosystem — and the sectoral boundaries that have defined U.S. privacy protection cease to make any sense.

So maybe we don’t need a federal law. These are certainly points to think about, and Gumbs summed it up this way in an email comment:

One overarching federal data privacy law is not necessary in my opinion and a working group of public and private agencies capable of helping both government and private businesses improve their data security practices would be far more beneficial.

Sue Marquette Poremba has been writing about network security since 2008. In addition to her coverage of security issues for IT Business Edge, her security articles have been published at various sites such as Forbes, Midsize Insider and Tom’s Guide. You can reach Sue via Twitter: @sueporemba

Should the U.S. Have a Federal Data Privacy Law? (September 26, 2018)
https://www.itbusinessedge.com/security/should-the-u-s-have-a-federal-data-privacy-law/

If the entire European Union can come together to enact serious data privacy regulations for its citizens, why can’t the United States Congress do the same? Right now, U.S. data privacy laws are scattershot among individual states, covering different things and, like the state-based data breach notification laws, only adding more confusion. A national law would provide the same protections to everyone.

On the other hand, GDPR has only been live for a few months, so we don’t know what the long-term implications will be. Or maybe we should wait to see how well the state privacy acts function before we take on a federal mandate.

I asked security and privacy professionals for their opinions: first, do we need a data privacy law at the federal level, and second, if we do, what should be in that law?

Almost everyone agreed that yes, a law passed by Congress is necessary.

“After numerous large-scale breaches and the well-publicized misuse of consumer data, we are well past the time for comprehensive data privacy protections for all U.S. citizens,” Michael Magrath, director, Global Regulations & Standards, OneSpan, said, adding that the data privacy law should apply to online and offline data.

Callum Corr, data analytics specialist at ZL Technologies, agreed that a federal law is needed, but he’s concerned about who would take the lead in writing the bill. “Big tech giants have been pushing politicians and commissioners alike to allow them to come together and write a policy that is going to be favorable to the largest companies in the industry,” he said. In fact, a number of big tech firms plan to introduce a data privacy framework to the Senate.

“If we allow the tech leaders to write the law that is supposed to regulate them, then it defeats the purpose,” Corr added. “The regulation needs to be consistent and therefore, it has to be federal.” State laws are set up to fail because they all have boundaries attached, and the flow of data today has no boundaries. A federal law would address that.

Something Has Been Started

There is one pending bill, S.2289, which was introduced in the Senate in January, Pravin Kothari, CEO of CipherCloud, pointed out. The bill calls for the creation of an Office of Cybersecurity within the Federal Trade Commission (OCS-FTC), which would create, issue, and distribute regulations requiring covered business entities (predominantly credit bureaus) to provide a complete overview of the technical and organizational security measures they have in place.

But, while this is a start, we need to proceed with caution. “The legislative environment is uncoordinated and generally ineffective. Look at HIPAA and PCI, which have been in place for long periods of time, but have not stopped health care organizations or financial institutions from becoming victims – regardless of the requirements and penalties,” said Kothari.

Empower Individuals and Their Right to Data Privacy

So we need the federal law, but what should it cover? The federal law should ideally empower individuals, said Rishi Bhargava, co-founder at Demisto. That includes the following rights (a rough sketch of how they might map onto a data controller’s request-handling interface follows the list):

  • The right to know what data is being collected by a data controller/processor
  • The right to deny the collection of that data
  • The right to ask for removal of that data at any time
  • The right to be informed about any major breach that compromises their data
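
Concretely, those four rights look a lot like request types a data controller would have to handle. The sketch below is purely illustrative, not drawn from Bhargava’s comments, any proposed bill, or an existing library; every name in it (DataRequestType, handle_request, and so on) is a hypothetical placeholder.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum, auto


class DataRequestType(Enum):
    """Hypothetical request types mirroring the four rights listed above."""
    ACCESS = auto()         # right to know what data is being collected
    OPT_OUT = auto()        # right to deny the collection of that data
    ERASURE = auto()        # right to ask for removal at any time
    BREACH_NOTICE = auto()  # right to be informed about a major breach


@dataclass
class DataSubjectRequest:
    user_id: str
    request_type: DataRequestType
    submitted_at: datetime


def handle_request(req: DataSubjectRequest) -> str:
    """Route a data-subject request to the appropriate (stubbed) workflow."""
    if req.request_type is DataRequestType.ACCESS:
        return f"Compiling a report of all data held on user {req.user_id}"
    if req.request_type is DataRequestType.OPT_OUT:
        return f"Halting further collection for user {req.user_id}"
    if req.request_type is DataRequestType.ERASURE:
        return f"Scheduling deletion of records for user {req.user_id}"
    return f"Queuing breach notification for user {req.user_id}"


if __name__ == "__main__":
    req = DataSubjectRequest("user-42", DataRequestType.ERASURE, datetime.utcnow())
    print(handle_request(req))
```

The point of the sketch is simply that each right implies an auditable workflow a company would have to build and a regulator could inspect, which is exactly the kind of obligation a federal law would spell out.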

We Need Accountability

Despite the industry-specific federal privacy laws already on the books, Ali Golshan, CTO and co-founder at StackRox, explained, we lack overall accountability for times when consumer data is lost or mishandled. However, he added, before we can have accountability, we need to figure out how to make the current compliance regimes work with a broader privacy act. And any privacy law will need to include transparency of data management across organizations of all sizes.

Privacy Needs Protection

You can’t think about data privacy without considering data protection. Any U.S. federal data privacy legislation should include a requirement, not a recommendation, that multifactor authentication be used to access systems containing personal information, Magrath suggested, and should leverage NIST’s Digital Identity Guidelines v1.1 and future revisions.
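
To make the “requirement, not a recommendation” idea concrete, here’s a minimal, purely illustrative sketch of an MFA gate in front of personal records, using the open-source pyotp library for time-based one-time passwords. The record store, secret store, and function names are hypothetical placeholders, not anything prescribed by NIST’s guidelines or by any proposed statute.

```python
import pyotp  # pip install pyotp

# Hypothetical per-user TOTP secrets; in practice these would live in a secure store.
USER_TOTP_SECRETS = {
    "user-42": pyotp.random_base32(),
}

# Hypothetical store of personal information the law would cover.
PERSONAL_RECORDS = {
    "user-42": {"name": "Jane Doe", "ssn_last4": "1234"},
}


def fetch_personal_record(user_id: str, password_ok: bool, totp_code: str) -> dict:
    """Release a record only when both authentication factors check out."""
    if not password_ok:
        raise PermissionError("First factor (password) failed")
    totp = pyotp.TOTP(USER_TOTP_SECRETS[user_id])
    if not totp.verify(totp_code):
        raise PermissionError("Second factor (TOTP code) failed")
    return PERSONAL_RECORDS[user_id]


if __name__ == "__main__":
    # Simulate the user's authenticator app generating the current one-time code.
    current_code = pyotp.TOTP(USER_TOTP_SECRETS["user-42"]).now()
    print(fetch_personal_record("user-42", password_ok=True, totp_code=current_code))
```

In a real deployment the secrets would sit in a hardware security module or dedicated secrets manager, and the TOTP factor might be swapped for a hardware key or push approval; the point is only that the second factor is checked before any personal record is released.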

Learn from GDPR

A lot of organizations have already taken steps to be in compliance with GDPR. Also, as states begin to enact their own laws, businesses will need to add more data privacy layers. We don’t need to build a U.S. law from scratch, said Nathan Wenzler, chief security strategist at AsTech.

“We could start with GDPR as a framework, since it’s already affecting U.S. companies who collect and use personal data for EU residents, and work to modify and improve upon it for our own purposes,” Wenzler explained. “The intent of GDPR satisfies many pieces of a user-centric data privacy and protection effort.”

GDPR is hardly perfect, of course, but a federal data privacy law similar to it would give users some amount of recourse to control how, where, and in what manner their personal data is used, which in and of itself would be a huge step forward in our data privacy efforts.

Overall, what we want from a federal data privacy law is something that will help businesses keep consumer data secure and private as they work across state and international borders, TrustArc Chief Data Governance Officer Hilary Wandall said, adding, “A U.S. national standard that applies across industry sectors will provide a stronger position and voice for U.S. business and policy interests in the international privacy regulatory dialogue.”

Sue Marquette Poremba has been writing about network security since 2008. In addition to her coverage of security issues for IT Business Edge, her security articles have been published at various sites such as Forbes, Midsize Insider and Tom’s Guide. You can reach Sue via Twitter: @sueporemba


The post Should the U.S. Have a Federal Data Privacy Law? appeared first on IT Business Edge.

]]>
Tech Companies Preparing Framework for Federal Data Privacy Legislation https://www.itbusinessedge.com/it-management/tech-companies-preparing-framework-for-federal-data-privacy-legislation/ Tue, 25 Sep 2018 00:00:00 +0000 https://www.itbusinessedge.com/uncategorized/tech-companies-preparing-framework-for-federal-data-privacy-legislation/ It looks like some of the largest tech and communication companies – Google, Apple, Amazon, Twitter, AT&T – will be meeting with Congress to discuss data privacy. The point of the hearing is to discuss their privacy services, but I’m seeing some articles that at least some of these companies intend to present ideas for […]

The post Tech Companies Preparing Framework for Federal Data Privacy Legislation appeared first on IT Business Edge.

]]>

It looks like some of the largest tech and communication companies – Google, Apple, Amazon, Twitter, AT&T – will be meeting with Congress to discuss data privacy. The point of the hearing is to discuss their privacy practices, but I’m seeing some articles reporting that at least some of these companies intend to present ideas for federal data privacy regulations.

I have to admit that I was surprised when I heard it. Many of these same companies have come out against the California Consumer Privacy Act. In a Security Boulevard article, Terry Ray, chief technology officer at Imperva, made the point that to tech companies, data is more valuable than gold, adding:

It’s more like uranium — extremely valuable, yet radioactive. Controlling this flow of information is difficult for any type of organization, but especially for companies such as Google and Facebook, where the sharing of data is a prime commodity.

But here they are, preparing to go to Congress with frameworks to guide data privacy legislation (at least Google and Apple have that intent), with The Hill adding:

The set of proposals is designed to be a baseline for federal rules regarding data collection. Google appears to be the first internet giant to release such a framework, but numerous trade associations have published their own in recent weeks.

A lot of people in the security world think this is a step in the right direction. In an email comment, for example, Harold Byun, vice president of products and marketing at Baffle, told me that he believes that yes, we should have a national data privacy act, and here’s what he thinks should be included:

  • Establish a data privacy bureau that would be responsible for defining requirements and standards and liaising with businesses.
  • Establish a personal records opt-in mechanism that gives users methods to authorize sharing with entities.
  • Impose financial penalties for non-compliance and data breaches, along with requirements on disclosure.

However, not everybody is thrilled that tech companies are suddenly not only on board but taking the lead in data privacy legislation talk. The Electronic Frontier Foundation pointed out that, historically, tech companies have stood in the way of consumer privacy legislation and, as we saw in California, don’t support it when it is proposed or passed. The EFF also argues that any Senate hearing on data privacy needs consumer privacy advocates at the table; its concern is that if only tech companies push the legislative frameworks, the result could be a weakening of some strong state-based bills.

It’s clear that some action is necessary, but as Mounir Hahad, head of Juniper Threat Labs at Juniper Networks, told me in an email comment, let’s slow down a little bit:

Before talking about the need for a U.S. regulation around data privacy, we must first understand that in this deeply connected global economy, the EU’s comprehensive GDPR regulation affects the vast majority of U.S. businesses. Most U.S. businesses, and some branches of government, do indeed handle EU citizens’ data and are therefore required to comply with GDPR. An additional U.S. regulation would just close the gap on the businesses that are truly local, as well as most branches of the federal and local governments.

Before engaging in any new regulation, it is best to watch and learn from the implementation of Europe’s GDPR as we already know of some flaws that need adjustment.

Sue Marquette Poremba has been writing about network security since 2008. In addition to her coverage of security issues for IT Business Edge, her security articles have been published at various sites such as Forbes, Midsize Insider and Tom’s Guide. You can reach Sue via Twitter: @sueporemba

The post Tech Companies Preparing Framework for Federal Data Privacy Legislation appeared first on IT Business Edge.

]]>
Apple Adds Mandatory Privacy Policy to the App Store https://www.itbusinessedge.com/applications/apple-adds-mandatory-privacy-policy-to-the-app-store/ Fri, 21 Sep 2018 00:00:00 +0000 https://www.itbusinessedge.com/uncategorized/apple-adds-mandatory-privacy-policy-to-the-app-store/ The concern about privacy, especially in the digital age, has been around for a while. It’s what led to compliance requirements like HIPAA. But it’s amazing what a few high-profile incidents (Facebook and Cambridge Analytica) and high-profile regulations (GDPR) can do to make data privacy a front-of-the-line issue. Other countries are jumping onto the data […]

The post Apple Adds Mandatory Privacy Policy to the App Store appeared first on IT Business Edge.

]]>

The concern about privacy, especially in the digital age, has been around for a while. It’s what led to compliance requirements like HIPAA. But it’s amazing what a few high-profile incidents (Facebook and Cambridge Analytica) and high-profile regulations (GDPR) can do to make data privacy a front-of-the-line issue. Other countries are jumping onto the data privacy legislation bandwagon, as are an increasing number of states. And now we’re seeing data privacy become a higher priority for individual businesses.

In June, Apple announced that it will require a data privacy policy for apps and app updates, effective October 3. Although the official news from Apple states that this policy is for new apps, I did note that other announcements and articles say the privacy policy requirement will apply to all apps offered through the App Store. I hope that is the case, because apps are a serious privacy weak spot.

We know that apps gather all sorts of information from your device, all of which you essentially agree to the moment you hit the install button. Tech.co pointed out some of the worst offenders for data gathering. You might be willing to put up with dating and social media apps using your personal data, but a flashlight app? Yet it was found that at least one flashlight app was not only using personal information taken from your phone but also selling it. (Luckily, newer phones have built-in flashlight functions, and when I noticed that on my new phone, my old flashlight app was quickly uninstalled. But probably not before damage had been unknowingly done.)

With that in mind, I hope Apple is taking into consideration those older apps that may not be updated in the near future. After all, Apple has long prided itself on being a leader in privacy and security, and as some reported, this update is long overdue.

Apple’s new policy will require app developers to share how data is collected and how that data is used. And the policy applies only to applications, not to Apple itself, even though iPhones alone collect a lot of information. As USA Today stated:

The company does admit that it freely collects information about what music we listen to, what movies, books and apps we download, which is “aggregated” and used to help Apple make recommendations. Apple says it doesn’t share this information with outside companies, either and notes that it doesn’t know the identity of the user.

Another point to note here is that the new privacy requirement may be there not so much to protect users as to protect Apple, as TechCrunch noted:

Apple’s new requirement, therefore, provides the company with a layer of protection – any app that falls through the cracks going forward will be able to be held accountable by way of its own privacy policy and the statements it contains.

But it’s a step in the right direction. The push for data privacy has to be a joint effort between government and business, and when one entity isn’t doing enough, it’s good to see another step up.

Sue Marquette Poremba has been writing about network security since 2008. In addition to her coverage of security issues for IT Business Edge, her security articles have been published at various sites such as Forbes, Midsize Insider and Tom’s Guide. You can reach Sue via Twitter: @sueporemba


The post Apple Adds Mandatory Privacy Policy to the App Store appeared first on IT Business Edge.

]]>