Opinion piece: Australia seeks privacy-preserving controls for legitimate use of websites and apps

Australian website and app owners have been battling bot traffic for some years now.

Some of these battles are particularly egregious: in entertainment ticket sales, for example, bot traffic is a persistent problem when it comes to purchasing in-demand concert tickets and serving legitimate clients' purchase requests.

The reality is that bot traffic of varying volume and sophistication is seen across all Australian industries and sectors.

The second part of that sentence is particularly important: bots have steadily increased in sophistication, to the point where they can mimic humans very closely, making it increasingly difficult for website or app owners to determine whether traffic, account creations or login attempts are genuine.

Established ways of doing this require an understanding of what human behavior patterns are like. This typically requires access to a variety of signals and user data attributes, and over time it has become necessary to collect more of these signals on a more regular basis.

This typically includes the use of client identifiers that track people across the web. While this provides a more detailed look at their usual patterns of behavior, a looming critical point for the Internet industry is that these methods are not as privacy-preserving as today’s world demands.

Attitudes towards privacy have changed in Australia, particularly over the past year, as large numbers of citizens have been caught up in multiple data breaches. These breaches have exposed weaknesses in the collection, storage, and permitted use of data, and have prompted Internet users to rethink how they interact with and hand data to web-based properties in general.

In general, people are now more concerned about who collects and retains data about them and where that data ends up. They have indicated a willingness to choose apps and websites based on how well those properties preserve their privacy. These heightened concerns are also likely to be reflected in Australian law, in particular changes to privacy law that emphasise consent for data collection and a higher standard for privacy in general.

It is in this context that privacy-preserving methods for separating human and bot traffic are beginning to appear.

The growth of PATs

Although several options have emerged in recent years, one in particular, Private Access Tokens (PATs), has attracted attention due to its origins and high-profile backers, including Apple, Google and Fastly.

It is also increasingly an option being trialled by Australian website and app operators, particularly those with an e-commerce presence.

PATs address the fundamental problem with the bot mitigation techniques available today: they treat all traffic as suspicious and rely on user action and browser data to assess risk. PATs instead rely on a familiar pattern: a trusted third party can do a better job of verifying the details of an unknown party to a transaction. It's similar to showing identification to prove your age: a third party knows some type of information about you, and the other party to the transaction trusts that third party.

At a more technical level, when a user (via their browser or device) attempts to reach a particular part of a website or application (a login page, for example), the browser may be presented with a new HTTP authentication challenge of type PrivateToken. The application includes in the challenge any additional context it wants verified, such as the user's location, and the device then asks an attester (currently Apple, for Apple devices) to verify that the user is on a valid Apple device and is signed in to an iCloud account at the time of that verification.
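To make the challenge step concrete, here is a minimal sketch of how an origin might construct the PrivateToken challenge it sends in a 401 response. It follows the TokenChallenge layout from the Privacy Pass authentication scheme that underpins PATs; the issuer and origin names are hypothetical placeholders, and a real deployment would also advertise the issuer's public key in the header.

```python
import base64
import struct

def build_token_challenge(issuer_name: str, origin_info: str,
                          redemption_context: bytes = b"") -> str:
    """Build a base64url-encoded Privacy Pass TokenChallenge (simplified).

    Token type 0x0002 is the publicly verifiable (RSA blind signature)
    token type used by Private Access Tokens.
    """
    issuer = issuer_name.encode()
    origin = origin_info.encode()
    challenge = (
        struct.pack("!H", 0x0002)                               # token_type
        + struct.pack("!H", len(issuer)) + issuer               # issuer_name
        + struct.pack("!B", len(redemption_context)) + redemption_context
        + struct.pack("!H", len(origin)) + origin               # origin_info
    )
    # base64url without padding, as carried in the HTTP header
    return base64.urlsafe_b64encode(challenge).rstrip(b"=").decode()

# Hypothetical names for illustration only:
challenge = build_token_challenge("demo-issuer.example", "shop.example.au")
header = f"WWW-Authenticate: PrivateToken challenge={challenge}, token-key=<issuer-public-key>"
```

The browser or device that receives this header is what kicks off the attestation step described above.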

Assuming everything checks out, the attester asks a trusted issuer such as Fastly to issue a cryptographically signed token confirming that the client passed the attestation check. The token is then returned to the application, which can use it to verify that the attribute check passed and, therefore, that the probability the client is human is high.
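On redemption, the origin needs to confirm the returned token is actually bound to the challenge it issued. The sketch below checks only that binding, assuming the Token layout from the Privacy Pass authentication scheme (type, 32-byte nonce, 32-byte challenge digest, key ID, authenticator); a real origin would additionally verify the RSA blind-signature authenticator against the issuer's public key, which is omitted here.

```python
import base64
import hashlib
import struct

def _b64url_decode(value: str) -> bytes:
    """Decode unpadded base64url, restoring any stripped padding."""
    return base64.urlsafe_b64decode(value + "=" * (-len(value) % 4))

def check_token_binding(token_b64: str, challenge_b64: str) -> bool:
    """Check that a redeemed token is bound to the challenge we issued.

    Sketch only: does not verify the cryptographic authenticator.
    """
    token = _b64url_decode(token_b64)
    token_type = struct.unpack("!H", token[:2])[0]
    # Bytes 2..34 are the client nonce; 34..66 hold the challenge digest.
    challenge_digest = token[34:66]
    expected = hashlib.sha256(_b64url_decode(challenge_b64)).digest()
    return token_type == 0x0002 and challenge_digest == expected
```

If the binding (and, in practice, the signature) checks out, the origin can treat the request as very likely human without ever learning who the user is.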

While there are multiple parties involved, there is a separation of duties such that no component in the chain sees everything.

Additionally, each stage has privacy-preserving elements, and no party knows more than it needs to know to perform its role. When the user's device or browser sends the PAT challenge to the attester, Apple checks the device attributes but knows nothing about where the token request came from, or which app or website the user visited. All it knows is that it has to verify those details. Once done, it requests a token from the issuer. The issuer has no idea about any of the steps prior to that point; the only thing it knows is that it trusts the attester. From there, it creates the token and returns it to the client.

PAT challenges are actively being tested by website and app operators who want bot detection to be more privacy-preserving. Our experience is that they are going live in places where operators might previously have implemented a CAPTCHA.

Given the current state of awareness around privacy and the handling of user data, operators are encouraged to familiarise themselves with PATs and test how they can be used to meet the needs of operating websites or web-based applications while keeping user privacy front of mind.

Guy Brown is a Senior Security Strategist at Fastly.
