"At a time when our lives are lived increasingly online, cybersecurity training is more important than ever. It is not just the risk of cyber threats, which can disrupt, damage, or even destroy data and physical assets, but also the links between cybersecurity and cyberbullying that pose a very personal threat. Increasingly, what we share online can be used against us. Cybercriminals and cyberbullies have one thing in common: they know how to tap into human emotions and motivations. By training teams on cybersecurity, we can make positive behaviours everyone’s responsibility, and help everyone understand how they can play their part, both from a security and cyberbullying perspective"
Neil J Frost (COO, Bob's Business)
Online. Internet. Apps. Live streaming. Gaming. Social Media. Emails. Texts. Phone. Laptop. Desktop. Tablet. Cyberspace. Cybersecurity. Cyberstalking. Cybercrime. Cyberbullying. Deepfaking. Phishing. Spearphishing. Catphishing. Cancel culture. Online hatred. Trolls. The list goes on…
In evolutionary terms, we’re still learning the etiquette of online behaviour, and although there are opportunities for great collaboration and good, there are also behaviours being exhibited at both an individual and a societal level that give great cause for concern.
Cyberbullying is the most rapidly growing type of bullying, and not just for children. It also infiltrates our workplaces, particularly as the boundaries between home and work life become increasingly blurred.
Social media – for better, for worse
At our conference in November 2020, Pete Trainor, digital technologist, author and digital anthropologist, quoted Cindy Gallop, who recognised some of the insecurity people of all ages face online:
“The young white male founders of the giant tech platforms that dominate our lives today are not the primary targets – online or offline – of harassment, abuse, sexual assault, violence and rape, and therefore they did not and do not proactively design for these issues. Those of us who are most likely at risk every day – women, people of colour, LGBTQ, the disabled – tend to design safe spaces and experiences.” (Cindy Gallop)
Pete commented that “the act of bullying someone, either intentionally or unintentionally, is now intrinsically baked into the very fabric and code of the platforms that we choose to communicate through; it's now part of society in a way that perhaps wasn't there when we were growing up, when we were children.”
There are ongoing discussions and cases about where the responsibility lies for creating safer online spaces. Is it with the people who produce the content and push the messaging, or is it up to the platforms themselves to regulate this environment better? How do we decide who is responsible? The platforms are pushing back, saying that it's the responsibility of the parents, the individuals, the groups, the colleagues, or whoever, to regulate their own behaviour. The High Courts and the people bringing cases are saying that, as the platforms provide the tools, they need to have some accountability here.
We also hear arguments about the right to freedom of speech and expression, and it is very important for people to be able to express political differences, different views, humour and banter. They are part of humanity; otherwise we would be a homogeneous society. However, there is too often a lack of patience, and an inability or unwillingness to debate and be curious about other views, leading to personal attacks or the shutting down of another’s views – the cancel culture – often with many others joining in. That’s when cyberbullying is at its most horrific.
One of the problems with social media networks is that if anything breaches their codes, it's relatively easy for them to recognise it and (eventually!) take it down. However, if something doesn't breach their codes, they're not obliged to take it down. Bullying can be subtle enough that it doesn't breach the codes and yet is still experienced as bullying, and that compounds the problem. In a recent case, a client reported racism on LinkedIn and was advised that it didn’t breach their code. The client could no longer view the comment, but it was not removed from the feed itself, and anyone who knew that person could instantly recognise it as racism.
So, as we use social media more and more in the workplace, it is becoming increasingly complicated to police this area, and we need to improve individual accountability and education around what's kind and good and right. Part of that education is recognising cyberbullying, understanding how easily it can move into cyberstalking and cybercrime, and adopting cyber-secure practices.
“Online hatred is so commonplace that the majority of incidents go unreported. According to British government data, 1,605 hate crimes occurred online between 2017 and 2018, a 40% increase on the previous year. But the Home Office admits this figure is probably a gross underestimate.”
Home Office “Hate Crime, England and Wales, 2017/18”
What are the solutions?
What about legislation? Action is being taken by the UK government to help make the internet a safer place for both individuals and businesses. The UK Council for Internet Safety was introduced in 2019, with a remit to improve online safety for everyone in the UK. Priority areas of focus include online harms experienced by children, such as cyberbullying and sexual exploitation; radicalisation and extremism; violence against women and girls; hate crime and hate speech; and forms of discrimination against groups protected under the Equality Act, for example on the basis of disability or race.
It has also played a role in the development of the Online Harms White Paper, introduced in April 2019, with an initial consultation response published in February 2020.
The online harms interim codes of practice provide voluntary guidance for companies to help them mitigate the range of risks arising from online terrorist content and activity and child sexual exploitation and abuse, ahead of the online harms regulator becoming operational.
The new regulatory framework for online safety proposed in the White Paper will set clear rules to help companies make sure their users are safe, while at the same time protecting freedom of expression. This is especially important when looking at information or activity that is harmful but not criminal.
The government will establish a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services. Compliance with this duty of care will be overseen and enforced by an independent regulator.
As technology advances at such great speed, so too does our need to create agile solutions. Is Artificial Intelligence the answer? There are some really interesting examples of how behaviours like these are being tackled through Artificial Intelligence (AI). Moonshot is using technology to disrupt violent extremism. The company uses a mixture of software and human judgment to identify individuals on the internet who appear interested in extremist propaganda, and then attempts to serve them counter-messaging. Another example is HateLab, whose aim is to provide a more accurate picture of hate speech across the internet. It is the first platform to use AI to detect online hate speech in real time and at scale.
So I asked Pete Trainor whether AI could be effective in counteracting cyberbullying.
“The big issue we have with AI is that AI is trained on data. That data is inherently laden with bias, so you're basically training AI systems to look for bias from biased data, so there's a massive problem with what it's looking for. The other thing – and again, I’m from an AI background, so I’m a huge advocate for the automation of systems and services – is that the problem we have with bullying is nuance. Machines look for absolutes, and they're not very good at picking out the nuance in natural language processing, which is the kind of AI that we'd be talking about for looking up or highlighting bullying-style behaviours.”
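To illustrate the ‘absolutes versus nuance’ problem Pete describes, here is a minimal sketch in Python. It is purely hypothetical – the word list, the example messages and the function name are invented for this illustration and are not taken from any real moderation system – but it shows how a detector built on absolute keyword matching catches explicit insults while missing the sarcasm and quiet undermining that make up so much real-world bullying.

```python
# Hypothetical sketch: a naive "bullying detector" that looks for absolutes.
# The word list and messages below are invented for illustration only.

ABUSIVE_TERMS = {"idiot", "loser", "stupid", "pathetic"}  # assumed toy word list

def flags_message(message: str) -> bool:
    """Return True if the message contains an explicitly abusive term."""
    words = {w.strip(".,!?'\"").lower() for w in message.split()}
    return bool(words & ABUSIVE_TERMS)

messages = [
    "You're a pathetic idiot.",                        # explicit abuse: flagged
    "Wow, another 'brilliant' idea from you. Again.",  # sarcastic put-down: missed
    "Don't worry, we all know you won't manage it.",   # quiet undermining: missed
]

for m in messages:
    print(f"flagged={flags_message(m)!s:<5}  {m}")
```

Only the first message is flagged; the other two pass untouched, even though most readers would recognise them as bullying. Real systems use far more sophisticated natural language processing than this toy example, but the underlying difficulty Pete points to – that nuance, context and history carry the harm – remains.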
What about using technology to help in the recording and reporting of bullying, harassment and cyberbullying? Online reporting and the use of apps are on the increase. In some organisations, internal systems and a lack of psychological safety can make it very difficult for people to report, and technology can make this much simpler. However, there is also a need to consider culture and resourcing. What will happen to the reports, which could cover a multitude of incidents and behaviours? Are organisations putting the necessary support in place alongside these tools, at both a human and a system level, to truly resource the problem and be able to take action?
Technology can be phenomenal for helping to identify a problem, a pattern or a trend, but it does not replace human interaction. We can use it to help protect ourselves: being aware of how we use apps and IT, reviewing our privacy settings, looking at who we interact with online and at how we share information. We can also use it proactively to flag areas of concern – to identify the areas, departments or individuals in the workplace that might be causing risk, or be at risk, because of their behaviour – but human interaction is still needed to support individuals towards a resolution, particularly in relational issues.
The role of cybersecurity
Cyberbullying and cybercrime are closely linked, and therefore cybersecurity is a key element of tackling cyberbullying at work. Cybersecurity training helps to create a healthy cyberculture.
So, how cybersafe are you? How cybersafe is your business?
In this world where so much work takes place online and the boundaries between work and home are increasingly blurred, how safe is your information, your identity, your reputation? How can we keep employees safe online? As cyberbullying becomes a growing concern, linking your prevention approaches to your cybersecurity practice makes perfect sense. We need to check our own behaviours at both individual and organisational levels. For example:
How much personal information are you sharing online?
How safe is the data held by your business?
What information can be accessed - about your employees, customers, suppliers?
How are you using and monitoring social media?
Lucy Howard is the Digital Communications Officer with the charity Bullies Out and specialises in cyberbullying, information leakage and digital footprints.
“So with a shift to remote working in 2020, what we are finding now is that our personal and our professional lives are very much blurring together, and with that comes a need to protect ourselves even more against inadvertent information leakage.”
Here’s a little of the advice that she shared with us at the United Against Workplace Bullying conference in 2020: keep work and home separate; don’t allow clients or customers to see personal information, including your home – check what can be seen in the background of video calls; and make sure you remain professional even if you’re using your own devices for work. Be aware that banter can turn into bullying, especially without the nuances that come from non-verbal behaviour, or through misunderstandings when words alone are interpreted.
More businesses are now asking people to use apps on their personal devices. If something inappropriate is posted on WhatsApp or other social media, you can be liable if you don’t report it, even if you haven’t read it. When you share photos, remember that the settings that apply are those of the receiving device, and you lose control of the image as soon as you press send.
This is why it is so important to include behaviours in your social media policy and to make it clear how they link to cyberbullying, harassment and hate crime. Make sure you are clear about email etiquette too.
Neil Frost, Chief Operating Officer at Bob's Business, an educational awareness company that delivers cybersecurity training online, also offered some key insights in this area.
He advised that, from a security point of view, you should always ask yourself who you are connecting with, and why, because data is now the most valuable asset in the world, more so than ever before. Historically, the technology was not designed with people's security – how we interact on those platforms – in mind. Ninety per cent of security breaches in large enterprises are caused by human-related error, and criminals focus on that because the predictability of people's behaviour makes it easy to target. The link between cybersecurity and cyberbullying is emotion, and one of the key things here is our willingness to share information online: as people, we have a natural instinct to trust, and that instinct is very easily socially engineered online.
You may think that you are at low risk if you don’t actively use social media, but others around you may be sharing content about you, particularly if you have a public-facing role. Whatever the level of usage, once the information is online, you have lost control of it. So, keep things in boxes – work, play, friends – so that you always have control from a security perspective; then, if something becomes infected or becomes an issue, it's very easy to shut down that box while still maintaining the others and your presence online.
We also asked Neil how easy it is for people to close down their identity, or whether someone would need quite specialist technical skills to create that anonymity. He explained that on many mainstream social media platforms it can be quite easy. The problem as a user is that you may think you have protected yourself through your privacy settings – that you're locked down – but the frequency with which platforms change those settings means that unless you check each time, you may well no longer have the same level of protection in place, which creates additional difficulties. Operating systems use technology to update themselves regularly, but when we have to become that operating system ourselves for our online accounts, we don't take the time to do the updates or even to educate ourselves, even though it can be quite simple to do.
So, how cybersafe are you? How cybersafe is your business? As cyberbullying becomes a growing concern, linking your prevention approaches to your cybersecurity practice makes perfect sense. Make sure you review your cyberculture today.
Conduct Change was founded in 2019 with the purpose of changing behaviour in workplaces and creating more courageous and compassionate approaches to preventing workplace bullying. The founder, Nicki Eyre, has been through her own experience of workplace bullying during her career and recognises the scale of the problem at both an organisational and an individual level.
Our Bullying Awareness e-learning course has been produced in collaboration with Bob's Business, and is offered alongside their extensive range of cybersecurity courses.
Cybersecurity is a key element in tackling cyberbullying. Why Bob's Business? Simple – they have a clear philosophy with people at the centre. Their approach, like ours, is to influence behavioural and cultural change.
We recognise that workplace bullying is a sensitive topic for many businesses. If you are concerned that you may have a bullying issue in your workplace, or just want help in opening up the conversations, we are here to advise. We offer a free and confidential discussion to understand the issues and explain what options are available for you.
t: 07921 264920