“They can assume all manner of shapes at their pleasure, appear in what likeness they will themselves…they are most swift in motion, can pass many miles in an instant…”
-from The Anatomy of Melancholy by Robert Burton (1621)
Almost four hundred years ago, a young Oxford graduate named Joseph Glanvill published The Vanity of Dogmatizing, a masterpiece of natural philosophy and spiritual enquiry. In the path-breaking work, Glanvill presciently mused about “aetherial” telepathy, anti-ageing technology, and human-powered flights to the moon:
“It may be some Ages hence, a voyage to the Southern unknown Tracts, yea possibly the Moon, will not be more strange then one to America. To them, that come after us, it may be as ordinary to buy a pair of wings to fly into remotest Regions; as now a pair of Boots to ride a Journey. And to conferr at the distance of the Indies by Sympathetick conveyances, may be as usual to future times, as to us in a litterary correspondence. The restauration of gray hairs to Juvenility, and renewing the exhausted marrow, may at length be effected without a miracle…”
In Glanvill’s eyes, these speculations were hardly utopian fantasies; on the contrary, they were postulations of what could be, visions of a futurity made possible by the ever-evolving sciences. Glanvill was clearly ahead of his time, but he was not the first to rhapsodise about the rapid accumulation of new knowledge. He was also not the first to seriously consider and embrace the idea of actio in distans (action at a distance).
For ages, this concept intrigued metaphysicians who were fascinated by the spirit world and its interactions with physical life. Many who lived in Glanvill’s time were influenced by a long line of intellectual luminaries which stretched back to the ancient Stoics. Yet it’s doubtful that any of them could have predicted the rise of the internet and its discarnate inhabitant, the ubiquitous social bot.
Bots, one could argue, are not dissimilar from the early modern idea of familiar spirits. After all, the members of this relatively nascent—and these days, much-maligned—class of “beings” are biologically lifeless, like viruses, and yet they are the most active denizens of cyberspace. Amorphous, intangible, and superhumanly fast, they are excellent vectors of information. Like the faithful servitor of Eugenio Torralva (the “Arch-magician of Castile”) and the Drache (treasure-finding dragons) of Eastern Europe, they can use their abilities to remotely carry out clandestine tasks and influence world events.
Therefore, it comes as no surprise that governments have incorporated bots into their influence operations and fitted them with bandoliers of memetic ammunition. Their usefulness as troll-soldiers perhaps lies in the fact that they can be mass-produced, stockpiled, and ordered into battle to serve as disseminators of agitprop and to engage in “bad-jacketing” campaigns. In recent months, Russia has taken most of the blame for this kind of hybrid warfare, but the truth is that organisations like NATO and DARPA have long dabbled in the shadowy world of bots. Unsurprisingly, the record shows that the U.S. government and its European allies have been openly experimenting with this kind of technology since at least 2014.
“‘A false report, if believed during three days, may be of great service to a government.’ This political maxim has been ascribed to Catharine de’ Medici, an adept in coups d’état…”
-Isaac D’Israeli
For the above reasons, bots are quickly becoming the bêtes noires of the digital age in the public mind—and yet few have sought to link them to the spirits of the Old World, the entities that once enthralled and haunted the minds of generations of philosophers. One filmmaker, however, is currently on a quest to investigate this conceptual relationship. Her name is Heather Freeman, and her film Familiar Shapes is slated to be released in 2019. Freeman, who is also a professor of art at the University of North Carolina at Charlotte, kindly agreed to an interview. Below she gives us a sneak peek into the origin and scope of her incredible documentary.
The Custodian: When did you first begin to think that there might be a link between social bots and familiars? Were you previously interested in witchcraft lore and programming?
Heather Freeman: Around 2015, I learned that an acquaintance of mine was Wiccan. I didn’t know anything about Wicca at all and being an academic (and most academics love new things), I dove into Margot Adler’s Drawing Down the Moon and Ronald Hutton’s Triumph of the Moon, which led into various research rabbit holes, not to mention a whole new bookshelf in my house. Well, bookshelves. I love the layers of mythology in many contemporary pagan practices (and I appreciate that those are fighting words to some, but I mean it with all respect and appreciation for both the history and the mythology). Anyway, a lot of discourse surrounding the impact of the [Margaret] Murray thesis and [Sir James] Frazer’s works on contemporary paganism led me to more recent scholarship on the early modern witch trials.
So as this is evolving, Brexit happens, the 2016 elections happen and I’m sort of watching social media devour itself and wondering what the hell is happening to the world. It was sort of painful watching social media consume my friends, colleagues, and students, and not just by playing Fruit Ninja. It was almost like they were surfing OSNs (Online Social Networks) just looking to get triggered by something. And I saw they were extending this knee-jerk response to challenging information into real life. I also saw other friends getting worn out by the whole thing and ditching their social media accounts entirely. I did, too, though I confess I revived them so I could start promoting Familiar Shapes. Like many people, I first heard the term “malicious bots” in news reports about the 2016 election. And I started chasing down this other line of research.
Then, in February 2017, a friend texted me. She knew I was interested in contemporary and historic witches and wrote, “Did you hear about the mass Facebook spell against Trump?” I hadn’t (because I was off Facebook at the time) and when I found an article about it, my brain kind of exploded. I’d known that contemporary witchcraft had grown a lot more diverse and layered since the days of Gerald Gardner and Alex Sanders, but it never really dawned on me how social media was a tool for contemporary witches.
When I first started thinking about making a documentary, it was totally different: it was going to be about witches who used OSNs to conduct rituals and spell work. Cyber witches and social media, it was going to be brilliant!
Only, once I started researching this, I couldn’t actually find any witches who worked this way. I just assumed witches were making bots and treating them like digital familiars, using online spaces to shapeshift, the works. What I did find are some very good articles from the 1990s on “cyber witchcraft”, and a number of books that mention how some witches are programmers (and some programmers are witches). And The Wild Hunt had some good articles on witches who made mobile apps for tarot divination, and they were kind enough to share some promising leads with me, but mostly I hit walls. It seems like right when social media gained popularity, this line of scholarly inquiry dropped off. Witches are understandably quite private. I was likely looking in the wrong places, in part, and I’m always happy to get new leads. I’m still very interested in this topic, and would love to speak with magical practitioners who use OSNs for workings or ritual, but this will be a different documentary for another day.
As I’m trying to track down invisible cyber witches, I’m reading more and more about social bots. These were completely fascinating to me, and the parallels between the programmer-bot and the witch-familiar were just too poetic for me to ignore. The more I read about both, and especially after I started talking to various scholars, the more distinct they became, yet the more tightly they were bound together through nuances of human behaviour: our biases, how these shape our social behaviour, and the challenge of “truth” versus “belief”. Eventually, I accepted where the research was taking me, and the documentary clarified into the current outline for Familiar Shapes.
Since malicious bots are often programmed to just re-tweet or re-post articles with clickbaity headlines that trigger our confirmation biases, they’re very effective in exacerbating online echo chambers, basically enhancing what we already, naturally do. It seems that these same confirmation biases can even be gleaned from the early modern witch trial records and pamphlets. This seems to be especially the case in local courts, as trial results were determined more by the biases of those courts than by the evidence per se. Changes in the developing judicial systems were often designed to prevent biases from affecting trial outcomes. We’re still working on improving this to the present day.
But biases and motivations are powerful phenomena, and manipulations by social bots have had very real-world, physical results. Bots have successfully organized real-world rallies (with the participants often mystified by the rally organisers never showing up). And there’s reason to believe that if people thought a witch cursed them, they would, in fact, get ill because belief can be much stronger than truth. Belief is more powerful and instantaneous than reason. This can be mitigated, of course. Reason can shape belief, for example, and we can craft action through a fine harmonising of belief and reason. But we can only do that if we accept that we’re each driven first by our biases, second by reason. We can’t help it. It’s hard-wired. But knowing this about oneself is the first act of resistance.
C: With regard to the pervasiveness of digital fake news, two notable studies have revealed some staggering realities. The first, an exposé by the New York Times, revealed the existence of a booming international industry in which faceless persons around the world help to mass-produce legions of followers for celebrities and digital influencers. The second, a joint study by the University of Massachusetts Amherst and the University of Leeds, revealed that, in the Philippines, public relations executives are often the “chief architects” of disinformation. With these studies in mind, would you say that the production of such media is mainly business-orientated—symptomatic of an age where “branding” and website traffic are paramount?
H: Yeah, and this gets to the heart of the matter. Bots are really, really easy to make and cheap to buy. My niece could babysit for a few hours and buy armies of bots, they’re just pennies each. You can also buy fake IP addresses and phone numbers (also for pennies), making it nearly impossible to trace the source of the bots, or the trolls who manage them. So, it makes economic sense for companies and political groups to use them. I’ve heard folks argue that it’s cheaper to buy Facebook ads than to run accounts, but if this were the case, why wouldn’t companies and political campaigns do that instead? Shady ethics are apparently a small price to pay for the more effective toolset.
For the sake of designing clear, repeatable studies, researchers tend to focus on either bots, trolls or disinformation, but these work in tandem. Because it’s so easy to hide the origin of a bot, they’re perfect tools for spreading disinformation. And by using fake accounts with hosts of fake followers, it’s not hard for a troll to infiltrate an actual network of friends. Let’s say someone has six hundred friends on Facebook. Odds are they’re more likely to accept a friend request from someone they don’t know. So, trolls try to friend people with high friend counts. Once they’re in, it snowballs. You might have, say, fifty friends, but if you get a friend request from someone you don’t know, and it says they share one mutual friend with you, you’re more likely to accept that request. Most of us are guilty of this (I know I am) because we’ve never had a real reason to mistrust these systems. Once a troll is “in”, it’s relatively easy for him or her to get the botnet connected, and then use the botnet to spread disinfo around the network.
I should probably clarify some terms here. It’s not just disinformation (truly fake news) that’s a problem, but also misinformation (inaccurate news), propaganda, clickbait, hoaxes, and even satire. (It’s depressing as hell to see smart people re-post perfectly good satire from The Onion with comments about how horrible and depressing it is, and how can the world be coming to this, etc.) Okay, that’s human error, sure, but bots are definitely involved in spreading other forms of disinformation and propaganda. And since we trust information based largely on crowd-sourced verification (1000 likes equals more “verified” than 2 likes), an army of bots can be frighteningly effective.
C: Interestingly, researchers have revealed that groups of trolls, whether vigilantes or provocateurs, comprise both human and bot accounts. Would you call the political and social use of these coordinated virtual mobs “sorcerous”?
H: It does feel like an online illusion or glamour. We’re constantly reading the small, social cues of others, be they physical or chemical (or magical). But we largely lose these cues in online spaces. This is why the poetic overlap with magical traditions is so alluring to me: the troll goes out into cyberspace and manifests his or her spirit army. Or maybe the troll is more like an elven king with his host of beautiful, sylvan faeries compelling the unwitting mortal to his side (OSN trolls are notorious for using photos of beautiful women on their fake Twitter profiles). They also remind me of fetches. The troll is a human being, after all, sending out his or her digital fetch into cyberspace, surrounding this fetch with his bots, his spirit familiars.
It’s also sorcerous in the sense that the troll is shaping one reality into another in order to fulfil his or her desires. The social and magical ethical dilemmas are also similar. I realise and respect that not everyone adheres to “An’ it harm none”, but if this is an ethos you try to follow (or the Golden Rule, or whatever), then creating and disseminating social bots presents a real ethical challenge.
In using bots, you are knowingly shifting your own reality (for example, the appearance of your own popularity online) in order to manipulate the perceptions and eventually opinions of others. Is this interfering with free will? There are good arguments against it, but I’d say yes. In making and disseminating bots, you are performing an illusion, likely a glamour upon others. And you’re doing this in order to transmute something larger.
And in an information war, how does a nation retaliate while maintaining its ethos? I just don’t know: that’s beyond my scope of expertise. But a three-pronged approach of public education, business self-regulation, and national policy seems the best bet. Right now, however, we’re just barely keeping up with the trolls, bots, and disinfo. With programmers and propagandists as with the sorcerers, it’s a wizard’s war. Meanwhile, we’re a bunch of angry villagers, standing around wondering why all the beer is sour, and ready to hang Goodwife Doe for witchcraft.
C: Given the fact that bots are evidently entrenched in our daily interactions in cyberspace, do you think we might fall into a runaway-bot nightmare scenario?
H: Yeah, I only wish this were an episode of Black Mirror.
Most of the computer scientists I’ve spoken with are very pessimistic. We’re in a kind of Darwinian race against the bot programmers: researchers find new ways to ID bots and disinfo, they publish the results, the tech companies take note and change their policies (sometimes), but by the time this happens, the bot programmers have already found a clever way around it. We’re not able to get ahead of them, and it’s very frustrating for researchers.
Lower-level bots are so easy to spot. Twitter and Facebook could block them today if they wanted. It’s the higher-level bots combined with micro-analytics that keep me up at night. For instance, one of the most challenging bot-types to detect is the human-assisted bot. Scholars don’t have a commonly agreed upon terminology for this yet (that’s how fast the game is changing) so you might also hear them called hybrid-bots or cyborgs. These are bots that are sent out but closely monitored and handled by a human (who is sometimes also a troll, but not necessarily). Most bots “speak” with a sort of language grid (article, subject, object, verb, etc.) or with short statements that are triggered when keywords are detected. These types of bots are pretty easy to spot: they just speak really funny.
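To make the “language grid” and keyword-trigger mechanics concrete, here is a minimal Python sketch of how such a low-level bot might assemble its posts. The vocabulary, trigger words, and canned replies are all invented for illustration; real malicious bots are wired into platform APIs, which this sketch deliberately omits.

```python
import random

# Hypothetical slot grid: a post is assembled subject + verb + object.
# This fill-in-the-blanks structure is why low-level bots "speak funny".
GRID = {
    "subject": ["The media", "This country", "Everyone"],
    "verb": ["ignores", "fears", "loves"],
    "object": ["the truth", "real patriots", "the polls"],
}

# Hypothetical keyword triggers: a canned reply fires when a keyword
# appears anywhere in an incoming message.
TRIGGERS = {
    "election": "Wake up and see what is really happening!",
    "poll": "The numbers never lie... or do they?",
}

def grid_post() -> str:
    """Assemble one post by filling each slot of the grid at random."""
    return "{} {} {}.".format(
        random.choice(GRID["subject"]),
        random.choice(GRID["verb"]),
        random.choice(GRID["object"]),
    )

def keyword_reply(incoming: str):
    """Return a canned reply if any trigger keyword appears, else None."""
    text = incoming.lower()
    for keyword, reply in TRIGGERS.items():
        if keyword in text:
            return reply
    return None
```

Because every output is a permutation of a small fixed vocabulary, detecting this kind of bot is largely a matter of spotting the repetition, which is exactly why the human-assisted bots described next are so much harder to catch.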
But when someone is interacting with a human-assisted bot, the human user can jump in and control the bot, massively increasing the believability of the bot’s behaviour. Talking with a piece of software like Siri or Alexa feels okay, even playful, because you know its human facade is really thin; it will never interact with you like a real human. But having a human jump in — like what happened to me on Facebook — is extremely disconcerting, since you think you’re talking to a bot the whole time. It’s like speaking to the dead on Halloween as a lark when you don’t really believe in the afterlife. Suddenly the voice of a dead Aunt Mildred is telling you to fix a leaky pipe in the house, and your entire foundation of belief is shaken. Except the hybrid-bot is much more insidious (Aunt Mildred is just concerned with water-damage, after all).
The thing that makes me worried about this whole discourse is that it’s so focused on Russia’s interference with global democracies. This is definitely a thing, I don’t want to play that down. But we need to be really concerned about domestically funded botnets as well. Your readers in the U.K. and U.S. should look up Cambridge Analytica, a company that does data mining and analysis. On the surface, this sounds like normal, boring tech stuff. But their research helped political groups micro-target voters in the Brexit referendum and the 2016 U.S. elections. Nothing they did appears to be illegal, but the ethics are very questionable.
As one of my interview subjects put it, back in the 1980s, you could put signs in your yard or host a fundraiser for your candidate and it was all very public. But the micro-targeting is extremely private, void of accountability, and scarily effective. The way they collect and analyse data is pretty damn close to mind-reading. So, imagine a political campaign that knows you better than your spouse or partner, and shapes its message to you, personally. That happened in 2016, and there’s no reason to think it’s not continuing.
Here’s the thing, though: I’m sure Russian bot farmers also think they’re doing the right thing, that they’re contributing to their nation’s best interests and the well-being of their countrymen. And even the ones who feel badly about it are probably doing it like the rest of us: they have bills to pay, kids to feed, and it’s a job. While some individuals who make and sell bots are inspired by anti-globalist ideologies, most just do it for the money. We’re human, and I’m just not ready to judge them.
C: Have you ever tried to create a bot? Is this something you’ll attempt to do in the documentary?
H: Yes, actually, it’s wicked easy, and I highly recommend it. It’s a great way to learn some basics of programming. And if any of your readers happen to be magical practitioners themselves, I’d love to know if anyone uses them for magical ends and how that works out. I think there’s a lot of creative potential here.
But yeah, I was talking with my husband Jeff Murphy (who’s also an artist) about bots, and we decided to use software called Tracery (designed by Kate Compton) to build one. Anyway, we built an art critic bot called @BertaFukoff (several art criticism nods in there) and had her wander around Twitter, offering her sage opinions on high art. For Berta, all great art related to pie, preferably pecan pie. So she’d say things like “This could be the seminal work of our age, but it needs more pie”, or “This is reminiscent of apple pie, the interdisciplinary crust browned to perfection”. Anyway, poor Berta was flagged and deleted during one of Twitter’s bot purges, but it was fun while it lasted.
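For readers curious how a Tracery-built bot like Berta generates its lines, here is a toy plain-Python imitation of a Tracery-style grammar (not the actual Tracery library): rules are named lists of templates, and any `#name#` token is recursively replaced by an expansion of that rule. Berta’s real grammar is not public; the pie-themed rules below are invented to echo her quoted posts.

```python
import random
import re

# Invented rules echoing Berta's pie-themed art criticism.
# "origin" is the conventional starting rule in Tracery grammars.
GRAMMAR = {
    "origin": ["This is reminiscent of #pie#, the #quality# #part#."],
    "pie": ["apple pie", "pecan pie"],
    "quality": ["interdisciplinary", "seminal"],
    "part": ["crust browned to perfection", "work of our age"],
}

def expand(symbol: str, grammar: dict) -> str:
    """Pick a random template for `symbol`, then recursively expand
    every #token# it contains."""
    template = random.choice(grammar[symbol])
    return re.sub(
        r"#(\w+)#",
        lambda match: expand(match.group(1), grammar),
        template,
    )
```

Calling `expand("origin", GRAMMAR)` yields a different pie-flavoured verdict each time; hooking such output up to a posting API is what turns a grammar into a bot, and what got poor Berta purged.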
One thing I’ve been trying to get Twitter to answer is if known bots are considered users. Bots aren’t evil in and of themselves, and there are lots of bots on Twitter that are very up-front about their bot-ness. According to Twitter, you’re not allowed to make a bot that posts on another user’s feed. But it’s unclear if that’s still true if the other user is also a bot. So, I’m teaming up with some folks to make a cluster of “bot familiars”. (These are going to be very up-front about being both bots and familiars, by the way.) Then I’ll send them out to known malicious bots and just have them bomb their feeds with spells, limericks, jokes, Black Phillip memes, the works. At least, we’ll do this until Twitter purges us. It’s digital-magical martial arts: if the bots push, we’ll pull.
Want more stories? Check out our spin-off project, Godfrey’s Almanack.