As the coronavirus crisis affects the world, there has been a sharp rise in working from home and, as a result, the use of video conferencing platforms such as Zoom. But Zoom has also come under fire for numerous privacy and security issues.
As reported by Help Net Security, some of these issues include:
All of these issues raise the question of how safe it is to use Zoom. However, it is important to note that since coming under increased scrutiny in the last few weeks, Zoom has been working to address many of these issues, as Help Net Security has reported:
Most importantly, Zoom Video Communications’ CEO Eric Yuan publicly pledged that, for the next 90 days, the company will temporarily stop working on new features and shift all of its engineering resources to focus on trust, safety, and privacy issues.
He apologized for the company falling short of the community’s privacy and security expectations, and said that many of the issues arose because Zoom was built primarily for enterprise customers (large institutions with full IT support).
You can read the full article from Help Net Security here.
It’s a positive step to see a company working towards better security and privacy, but although Yuan has argued that they “did not design the product with the foresight that, in a matter of weeks, every person in the world would suddenly be working, studying, and socializing from home”, these problems should nonetheless have been addressed sooner.
The chief question here is whether it’s safe to use Zoom. You should always be careful about using any platform on which you can share data, and on the whole, there are more secure services available.
Are you concerned about data privacy issues during the coronavirus crisis? Contact us today to get our expert, professional advice.
The Royal College of Psychiatrists has called for social media data to be handed over to academics in order to protect children and young people who are at risk of suicide.
By studying the content that is being viewed, the hope is that new research could help protect users from material that could harm them.
According to an article from The Guardian:
“We will never understand the risks and benefits of social media use unless the likes of Twitter, Facebook and Instagram share their data with researchers,” said Dr Bernadka Dubicka, chair of the college’s child and adolescent mental health faculty. “Their research will help shine a light on how young people are interacting with social media, not just how much time they spend online.”
Data passed to academics would show the type of material viewed and how long users were spending on such platforms but would be anonymous, the college said.
That the data would be anonymised could potentially make this course of action permissible under GDPR, but this data is nonetheless extremely sensitive. Care would have to be taken to ensure that it was shared with academics legally and that users were sufficiently protected.
The idea has received support from other sources as well. The Guardian goes on:
NHS England challenged firms to hand over the sort of information that the college is suggesting. Claire Murdoch, its national director for mental health, said that action was needed “to rein in potentially misleading or harmful online content and behaviours”.
She said: “If these tech giants really want to be a force for good, put a premium on users’ wellbeing and take their responsibilities seriously, then they should do all they can to help researchers better understand how they operate and the risks posed. Until then, they cannot confidently say whether the good outweighs the bad.”
Click here to read the full article from The Guardian.
With the government currently planning measures to make the internet a safer place for users, including setting up an independent regulator and placing a duty of care on online companies, the Royal College of Psychiatrists may well get what they want here.
But with data privacy being a major concern here, there are also likely to be objections. According to the BBC, civil rights group Big Brother Watch stated that users should be “empowered to choose what data they give away, who to and for what purposes”, and that young people should not be treated like “lab rats” on social media.
E3 (the Electronic Entertainment Expo) is one of the biggest events in the calendar for video gaming – but it’s recently been revealed that a data breach at this year’s event exposed the personal data of over 2,000 people.
This E3 data breach came as a result of a spreadsheet that was published on the event’s website and made publicly available.
As reported by Kotaku:
The Entertainment Software Association, the organization that runs E3, has since removed the link to the file, as well as the file itself, but the information has continued to be disseminated online in various gaming forums. While many of the individuals listed in the documents provided their work addresses and phone numbers when they registered for E3, many others, especially freelance content creators, seem to have used their home addresses and personal cell phones, which have now been publicized. This leak makes it possible for bad actors to misuse this information to harass journalists. Two people who say their private information appeared in the leak have informed Kotaku that they have already received crank phone calls since the list was publicized.
You can read Kotaku’s full report on the story here: https://kotaku.com/e3-expo-leaks-the-personal-information-of-over-2-000-jo-1836936908
While the ESA moved quickly to plug this breach and limit the danger to users, they made a crucial mistake. They deleted the page containing the link to the spreadsheet – but after the story broke in the news, it was found that the spreadsheet itself was still accessible.
This E3 data breach could potentially be very costly for ESA. With journalists attending the event from all over the world, they could find themselves subject to investigations and penalties under multiple different data protection laws, including GDPR.
Kotaku also updated their report to note that ESA provided the following statement:
In the course of our investigation, we learned that media contact lists from E3 2004 and 2006 were cached on a third-party internet archive site. These were not files hosted on ESA’s servers or on the current website. We took immediate steps to have those files removed, and we received confirmation today that all files have either been taken down or are in the process of being removed from the third-party site.
We are working with our partners, outside counsel, and independent experts to investigate what led to this situation and to enhance our security efforts. We are still investigating the matter to gain a full understanding of the facts and circumstances that led to the issue.
But with the data already out there, the damage has likely already been done.
Contact us straight away if you’re concerned about the possibility of a data breach at your organisation. Under GDPR, the fines can be severe: up to 20 million euros or 4% of annual global turnover, whichever is greater!
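To make that figure concrete, here is a rough illustration of how the upper-tier GDPR fine cap (Article 83(5)) scales with company size – the turnover figures below are hypothetical examples, not real cases:

```python
def gdpr_max_fine(annual_turnover_eur):
    """Upper tier of GDPR administrative fines (Art. 83(5)):
    the greater of EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A company turning over EUR 1 billion faces a cap of EUR 40 million...
print(gdpr_max_fine(1_000_000_000))  # 40000000.0

# ...while a smaller firm is still exposed to the full EUR 20 million ceiling.
print(gdpr_max_fine(50_000_000))  # 20000000
```

In other words, the 20-million-euro figure is a floor on the maximum, not a ceiling: for large organisations the 4%-of-turnover limb can be far higher.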
Is your smartphone listening to your conversations? I’ve had a lot of creepy experiences lately, where a verbal conversation I’ve had with someone is suddenly reflected in the adverts served up to me by my Android smartphone. For example, someone asked me who Help for Heroes were, so I explained it – and then what was the very next advert to show up on my phone, despite never having been searched for on that device or any other?
Mental health support for ex-servicemen. Just one of many. So I started digging to find out more about how this is happening – and whether anyone genuinely has the rights to listen in to my conversations.
As it turns out, it’s not a conspiracy theory. It’s been discovered that your smartphone really is listening in and collecting data about you. Hundreds of smartphone apps are using a technology from a company called Alphonso, which accesses a phone’s microphone to collect advertising data.
Alphonso’s software seems to be particularly focused on a user’s TV-watching habits. It listens in on the phone’s local environment, and receives audio samples which it compares to commercial content. If a match is found, it will then attempt to deliver targeted ads for that same content to your phone.
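We don’t know the details of Alphonso’s actual implementation, but the general matching technique – reducing audio to compact fingerprints and checking overlap against a library of known commercials – can be sketched in a few lines. Everything below (the chunk hashing, the threshold, the toy “audio” byte sequences) is an illustrative assumption, not Alphonso’s code:

```python
import hashlib

def fingerprint(samples, chunk=4):
    """Reduce a stream of audio samples to a set of coarse chunk hashes."""
    return {
        hashlib.sha1(bytes(samples[i:i + chunk])).hexdigest()
        for i in range(0, len(samples) - chunk + 1, chunk)
    }

def matches(mic_samples, ad_library, threshold=0.5):
    """Return the names of known commercials whose fingerprints
    overlap the microphone sample beyond the threshold."""
    mic_fp = fingerprint(mic_samples)
    hits = []
    for name, ad_samples in ad_library.items():
        ad_fp = fingerprint(ad_samples)
        overlap = len(mic_fp & ad_fp) / max(len(ad_fp), 1)
        if overlap >= threshold:
            hits.append(name)
    return hits

# Toy "audio": short byte sequences standing in for PCM samples.
library = {
    "car-ad": [10, 20, 30, 40, 50, 60, 70, 80],
    "soda-ad": [90, 91, 92, 93, 94, 95, 96, 97],
}
room_audio = [1, 2, 3, 4] + [10, 20, 30, 40, 50, 60, 70, 80]
print(matches(room_audio, library))  # ['car-ad']
```

Real systems use far more robust acoustic fingerprints (e.g. spectrogram peaks rather than raw-byte hashes), but the principle is the same: the phone never needs to "understand" your speech, only to recognise known commercial audio in the room.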
Did these apps genuinely get our specific, informed, granular consent to do this? And is this consent retractable? If not, then it would appear that this kind of data collection doesn’t conform to GDPR.
If you want to prevent your smartphone listening to your conversations, there are several things you can do to safeguard your data. Most crucially, you need to control permissions for your smartphone’s microphone:
- For iOS, go to Settings -> Privacy -> Microphone
- For Android, go to Settings -> Apps -> App Permissions
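If you’re comfortable with the command line, Android users can also audit and revoke the microphone permission from a computer over adb (Android 6.0+ runtime permissions; `com.example.someapp` below is a placeholder package name, not a real app):

```shell
# List installed packages that mention the microphone (RECORD_AUDIO) permission
adb shell pm list packages | cut -d: -f2 | while read pkg; do
  adb shell dumpsys package "$pkg" | grep -q "android.permission.RECORD_AUDIO" && echo "$pkg"
done

# Revoke microphone access for a specific app (replace with the real package name)
adb shell pm revoke com.example.someapp android.permission.RECORD_AUDIO
```

This does the same thing as the Settings menu above, just in bulk – useful if you want to see every app that has asked for the microphone at once.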
So I changed the permissions of which apps could use my phone’s microphone. Now the ads I see are stuck in a timewarp – still trying to flog the same things they were a month ago. So, you win some… you lose some!
Want to find out more about GDPR and data protection? Click here for all the information you need…