It can be triggered as part of a website's default procedures or by rapid activity on the user's part: reCAPTCHA forms, a descendant of the CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), are likely familiar to many modern users of the Internet. A set of tiles with images appears onscreen, and someone has to click on all the images that, for instance, contain bicycles in order to be marked as a human rather than software or a robot. Alternatively, a box holding an even smaller box to be checked next to the phrase "I'm not a robot" might appear, and the human user is to use their computer mouse or their device's touch screen to click the box and pass the test.
This is in place to thwart bot activity, the online doings of an automated program. Mere boxes meant to be clicked by human users of desktops, laptops, or smartphones are ultimately inadequate as an attempt to epistemologically distinguish between robots and humans (really, any such test is, in an ultimate sense). Either software could be programmed to click the box, or a physical machine could perform the action if an android or similar robot were to grasp the computer mouse. However, the goal of reCAPTCHA is not just to use the clicking of the box as the criterion for treating a user as a human--it is the measurement of mouse movement before someone clicks the box that serves as the criterion.
A human user is very, very likely to make small deviations from a straight line to the box to be checked. It is not that a person moves their mouse cursor, if a desktop or laptop is being used, in broad, erratic spiral motions as they move it toward where they need to click for the reCAPTCHA, but that very small, perhaps unnoticed "imperfections" in the mouse movement are a very probable indicator of a human user. Taking time to move the mouse, as opposed to having a bot click instantly, is further evidence that a human is the user--and yes, when I say evidence, I mean something that falls short of logical proof, a fallible but probabilistic support.
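The kind of probabilistic judgment described above can be sketched in code. To be clear, this is a toy heuristic of my own invention, not Google's actual reCAPTCHA algorithm: it scores a cursor trajectory by how far it strays from the straight line between its first and last points, and by how long the movement took. The function names and thresholds are all hypothetical.

```python
# A toy illustration, NOT the real reCAPTCHA algorithm: small deviations
# from a straight path plus nonzero travel time are treated as evidence
# (probabilistic, not proof) of a human user. Thresholds are invented.
import math

def straightness_deviation(points):
    """Mean perpendicular distance of each point from the start-end line."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0
    # Perpendicular distance from (px, py) to the line through the
    # endpoints, via the cross-product formula.
    devs = [abs(dx * (y0 - py) - dy * (x0 - px)) / length
            for px, py in points]
    return sum(devs) / len(devs)

def looks_human(points, duration_s, min_dev=1.5, min_time=0.3):
    """A fallible guess: enough wobble and enough elapsed time."""
    return straightness_deviation(points) > min_dev and duration_s > min_time

# A perfectly straight, instantaneous path scores as bot-like;
# a slightly wobbly, slower one scores as human-like.
bot_path = [(0, 0), (50, 50), (100, 100)]
human_path = [(0, 0), (24, 21), (47, 55), (71, 68), (100, 100)]
print(looks_human(bot_path, 0.0))    # False
print(looks_human(human_path, 0.8))  # True
```

Note that the heuristic only ever outputs a verdict of likelihood: a straight path with zero duration fails both checks, while the jittered, slower path passes both, which mirrors the evidential (not demonstrative) nature of the test.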
A bot could of course replicate more "human" habits here. It is obvious (to a genuine rationalist who knows consistency with logical axioms dictates this) that it is logically possible for a machine or software to hypothetically move the onscreen cursor with more erratic trajectories that do not travel in straight lines. All it would take is for it to be programmed that way and then execute its function without error, or, if there were a genuinely conscious software entity [1], to adapt accordingly with intentionality behind the adjustment. As for a physical android or other machine rather than an immaterial program, sentient or not, of course such a robot could--whether as the result of deterministic programming or a conscious choice on the machine's part (something that could never be proven)--wiggle a mouse in a way that is treated as "human".
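The logical possibility just described is easy to demonstrate in a few lines. The sketch below, with invented function names and parameters, shows how a program could in principle synthesize a jittery, human-looking cursor path instead of a straight instantaneous jump; it is purely illustrative of the point, not an endorsement of evading such tests.

```python
# Illustrative only: a bot could interpolate between two screen points
# and add small random deviations, producing a trajectory that a
# straightness heuristic would read as "human". Names are hypothetical.
import random

def jittered_path(start, end, steps=20, jitter=3.0):
    """Walk from start to end in small increments with Gaussian wobble."""
    (x0, y0), (x1, y1) = start, end
    points = [start]
    for i in range(1, steps):
        t = i / steps
        # Linear interpolation plus a small random offset per coordinate.
        points.append((x0 + (x1 - x0) * t + random.gauss(0, jitter),
                       y0 + (y1 - y0) * t + random.gauss(0, jitter)))
    points.append(end)  # land exactly on the target (e.g. the checkbox)
    return points

path = jittered_path((0, 0), (200, 150))
```

Pausing briefly between the points (rather than emitting them all at once) would likewise mimic the human travel time mentioned earlier.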
The difference between software and hardware is that a program runs on a machine and is intangible, while a machine is the collection of physical components which can be engineered to run software. Either software without a separate machine body or a literal machine, even a wind-up one, could pass the modern evolutions of the CAPTCHA test with ease under certain conditions. If an android a person directly stares at were realistic enough, no outward distinction would be discernible between it and an actual human. This is all the more the case with the online activity of a distant user whose face and behaviors cannot be seen through mere website activity anyway. Probabilistic evidence is enough to act as if an entity is a person or software for reCAPTCHA purposes, but only a fool would think they can have absolute certainty--which can only be derived from utter logical necessity with no epistemological assumptions being made--that an online presence is due to a human, an automated bot, a conscious software entity, or a physical machine directly using hardware to access the Internet.