CONTENT ADVISORY – the following post contains explicit reference to the Data Protection Act and multiple scenes featuring Risk Assessment from the outset. Some readers may find its dullness offensive.
Still reading? OK, you have been warned…
In the rush to celebrate newness and capitalise on all the exciting gains made possible by technology, it’s rare that schools pause and think deeply about the risks of doing so. This is particularly the case where risk isn’t obvious and doesn’t necessarily translate as an immediate Child Protection concern.
I’ve been working with a couple of colleagues in a school who have, I think, taken an exemplary approach in this area that is worth sharing.
The school in question has a 1-to-1 iPad strategy and consequently produces a lot of data; children are making work, taking photos, having conversations, and producing GPS co-ordinates just by walking around. Staff are doing the same: writing reports, assessing progress, making seating charts, et cetera. In the world of the iPad, very little of this data stays on the device for long; most of it makes its way up to some other storage location via the web. This is potentially problematic when you consider that under the Data Protection Act, the school has a duty to ensure that ‘personal data’ remains under its control.
The trigger event was last year’s ‘Fappening’ (top tip: don’t attempt to Google that word. I just did and I’ve had to erase my hard-drive and move house). The revelation that every photo taken with an iOS device was silently making its way up to the iCloud came as distressingly late news to hundreds of celebrities with weak AppleID passwords, guessable email addresses and security questions that used information that was in the public domain (‘What was the name of your first pet?’ for example). Clearly, this represented quite a credible CP risk – children take photos of themselves and of each other all the time. It’s not hard to imagine pupils being blackmailed, groomed or bullied as a result of a breach of their iCloud storage. Add to this the fact that all the data held on Apple’s iCloud servers is not explicitly protected by EU law (it’s stored in North Carolina) and the risk is magnified.
Equally easy to foresee is a future in which schools are embarrassed or even sued because of information which leaks out of or is stolen from online systems. The MIS is an obvious place that needs securing, but teachers are using multiple other digital methods to record information too. Consider the position of the Government minister or company CEO forced from office when details of their involvement in an alleged racist bullying incident at school become public knowledge… Who would be at fault and what might a court consider to be actionable? As we inexorably move towards cloud-hosted services for almost everything, these scenarios need to be considered seriously, and plans put in place to mitigate the likelihood of them happening.
At the core of the school’s approach to managing risk is its assessment. ‘Risk Assessment’ is usually greeted with groans of fatigue, but in this case it proved to be a useful, revealing exercise in forensic educational administration…
Every app that the school uses is risk assessed. The team (made up of the Director of Digital and the Data Protection lead) ask the following questions:
- What does this app do and is it useful? (essentially, what’s the educational value)
- What types of data does it store? (Photos? Children’s work? Credit card details?)
- Where does it store data? (on the device? On servers somewhere else?)
- Have there been any known breaches of its data? (easily Googled/ Bing-ed)
- How do they make their money? (If it’s totally free, then the sale of data is probably how effort is monetised)
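The five questions above amount to a small risk register. As a minimal sketch of how one might be recorded in Python – every field name and the example app are my own illustrative assumptions, not anything the school actually uses:

```python
from dataclasses import dataclass

@dataclass
class AppRiskAssessment:
    """One entry in a hypothetical app risk register; field names are illustrative."""
    name: str
    educational_value: str   # Q1: what does it do, and is it useful?
    data_types: list         # Q2: photos, children's work, payment details...
    storage_location: str    # Q3: "device", "EU data centre", "US data centre"...
    known_breaches: bool     # Q4: any known breaches (easily searched)
    business_model: str      # Q5: how do they make their money?

    def holds_personal_data(self) -> bool:
        # Q2 is the decisive question: no personal data means negligible risk.
        return len(self.data_types) > 0

register = [
    AppRiskAssessment(
        name="ExampleHomeworkApp",          # hypothetical app
        educational_value="Records the date and type of homework set",
        data_types=[],                      # dates and task types only
        storage_location="EU data centre",
        known_breaches=False,
        business_model="school subscription",
    ),
]

for app in register:
    verdict = "negligible risk" if not app.holds_personal_data() else "needs review"
    print(f"{app.name}: {verdict}")
```

Even a spreadsheet with these columns would do; the point is that the answers are written down once and can be re-checked when an app updates its terms.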
Communication with the developers via email is usually enough to get the information required. Using this, the school is in a position to make a judgement about whether the answer to Q1 justifies any risks that emerge from Qs 2-5.
Q2 is probably the decisive one in determining what the risk actually is. If an app just records the date and type of homework set and doesn’t keep any personal data, then in reality the risk is negligible. The point is not to spot apps that use US data centres and then triumphantly ban their use with a whoop of bureaucratic joy – the point is to identify whether the risk of using an app outweighs its usefulness. On this basis, the school has only excluded one app entirely. They also now have a process by which anyone at the school can suggest new apps be vetted for the approved list.
The third question above is a crucial one for schools in the UK. One of the provisions of the DPA is that personal data must not be transferred outside the European Economic Area, because once it has been, there is no legal way of preventing it being misused for marketing or worse. There are two tiers of compliance here:
1. Only using applications that store data in EU data centres. Google and Microsoft both take this approach, as do most European-developed apps.
2. Where data is stored outside the EU, only using apps whose developers have signed up to a Safe Harbour agreement. This gives some level of comfort that the developer will comply with EU Data Protection law, although many would argue that there is no way by which to compel them, making the agreement worthless.
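As a rough sketch of that two-tier logic – the function name and return strings are my own wording, not a legal test, and a real judgement would also weigh backup locations and contractual terms:

```python
def dpa_transfer_tier(stores_in_eea: bool, safe_harbour_signatory: bool) -> str:
    """Classify a storage arrangement against the two tiers above.

    Illustrative only: these two flags are a starting point for a
    conversation with the developer, not a compliance verdict.
    """
    if stores_in_eea:
        return "tier 1: data stays in EEA data centres"
    if safe_harbour_signatory:
        return "tier 2: Safe Harbour signatory (weaker assurance)"
    return "outside both tiers: avoid storing personal data here"

# e.g. an app using a US data centre whose developer has signed up:
print(dpa_transfer_tier(stores_in_eea=False, safe_harbour_signatory=True))
```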
The Government’s recommended educational cloud services paper is a useful starting point to understand the considerations in this area. It is worth noting that only Microsoft and Google come close to full compliance (and both caveat their commitment not to move data out of the EU due to their backup arrangements). There is also the ongoing FBI action that is attempting to force Microsoft to give up data which is of interest to the federal authorities yet which is stored in the EU and protected under its laws… It may simply not be possible for companies with a presence in the United States to ever guarantee that they will respect EU Data Protection law.
So, to conclude, let’s return to the question that triggered this whole process at the school – can the iCloud be safely used in the light of the Fappening?
iCloud is extremely useful and for an iPad school, it’s probably the only realistic way of ensuring that iPads get backed up. The other method relies on users (children and, far worse, teachers) remembering to physically plug in to a computer running iTunes – in short: never going to happen. You can turn off the backing up of photos, but this leaves another problem – what if you actually want photos backed up? How can this be done securely and automatically? The answer to this last bit may well be Microsoft’s OneDrive app, which offers automatic ‘Camera backup’ to an EU-located data centre, secured by credentials that the school controls through its Office 365 domain, including complex passwords.
The school has also taken the step of removing personal identifiers from its Apple ID creation process: IDs use only the school phone number and address, and are linked to an email address in the anonymous format ‘email@example.com’.
More generally, as Apple’s security is now significantly improved and what the iCloud backs up can be granularly controlled through an MDM such as AirWatch, a school can have more confidence that personal data is not being sent there. This is a managed risk, balanced against the value of automatic backups.
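For illustration, this kind of granular iCloud control is typically done through Apple’s Restrictions payload in a configuration profile. The fragment below is a sketch under those assumptions, not the school’s actual profile; the key names are Apple’s documented restriction keys, but availability varies by iOS version and supervision status:

```xml
<!-- Fragment of a Restrictions (com.apple.applicationaccess) payload.
     iCloud backup itself stays enabled; only photo and document sync
     to iCloud are switched off. Illustrative sketch only. -->
<dict>
  <key>PayloadType</key>
  <string>com.apple.applicationaccess</string>
  <key>allowCloudPhotoLibrary</key>
  <false/>
  <key>allowCloudDocumentSync</key>
  <false/>
</dict>
```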
As I said at the top of the post, it’s the rare school that is thinking in this depth about cloud data storage and its attendant risks. Is yours?