How the Human Algorithm Adapts to Digital

by Ashwin Rajan

As Facebook has grown more successful, its core algorithms have grown more biased. This isn't necessarily a bad thing. It helps the platform benefit users, partners, and itself in a number of ways. But those biases can be at odds with the emerging desires of users.

To illustrate, I want to discuss how the famous Facebook 'like' has become a tussle between digital algorithms and human (cognitive) ones.

Facebook's algorithm, driven by its business model, is biased to maximise the amount of time I spend on the platform. More time means more potential attention. More potential attention means advertisers will pay more to feature within that span of attention. So far so good.

But I, a 21st-century hyperconnected human, am running out of attention. I do find Facebook an invaluable service for keeping in touch with my 2,000+ friends. But to do so I increasingly want to see as few posts from them as possible. This has become an emergent core goal for me as a Facebook user. It is not that I inherently dislike what my friends post. It is that my attention has become characterised by rapidly escalating opportunity cost.

Human attention in the digital era is characterised by rapidly escalating opportunity cost. Give attention to something, anything, and you immediately forego the opportunity to attend to everything else.

In learning to navigate the seduction of Facebook's algorithms to my own advantage, my human algorithm has become smarter. It has honed several tricks to outsmart Facebook's own. These include the enormous cumulative time I have spent curating my Facebook feed, carefully pruning away contacts and pages that provide little or no value.

But primary among these tactics to get the most out of Facebook is my evolved use of the 'like' button. You see, Facebook thinks that when I 'like' a friend's post, I want to see more of their posts. That is how it is designed to interpret my like. In contrast, when I 'like' something I can actually MEAN one of many things, such as:

  • "I support what you are doing. Full power to you!"
  • "I really like this piece of content you posted."
  • "Hey, how's it going? Been a while!"
  • "It was nice to meet you."
  • "It was awesome to meet again last week."
  • "Aw, I do like how you think."
  • "Mum, you are the best."
  • "I have always admired you."
  • "I am so glad we are friends."
  • "I never appreciated this connection we have as much as I do now."
  • And so on and on ...
  • Some complex combination of the above.

Note: none of the above directly translates to "I want to see more of what you just posted." But that is how Facebook treats my likes. Thus, it serves me more of the same, similar, or related content. Or it serves me more from the same friend. Over time my human algorithm has caught on to Facebook's behaviour, and I have adapted my use of likes accordingly. My 'likes' have now in effect become a sort of currency that I am careful not to give away too easily. I won't give one away unless there is the promise of it bringing me the kind of returns I desire. And I can sense where those returns may lie, intuitively.
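To make the mismatch concrete, here is a toy sketch (purely illustrative, not Facebook's actual system) of how an engagement-driven ranker treats every like as the same flat relevance signal, regardless of what the person pressing the button meant:

```python
from collections import defaultdict

# Toy engagement-based feed ranker (illustrative only; invented for
# this example). Every 'like' bumps the affinity score for the friend
# who posted, so their future posts rank higher -- no matter what the
# like actually meant to the user.
class ToyFeedRanker:
    def __init__(self):
        self.affinity = defaultdict(float)

    def register_like(self, friend):
        # The platform reads a like as "show me more from this friend."
        self.affinity[friend] += 1.0

    def rank(self, posts):
        # posts: list of (friend, post_text); highest affinity first.
        return sorted(posts, key=lambda p: self.affinity[p[0]], reverse=True)

ranker = ToyFeedRanker()
ranker.register_like("mum")          # meant: "Mum, you are the best."
ranker.register_like("mum")
ranker.register_like("old_friend")   # meant: "Been a while!"

feed = ranker.rank([
    ("colleague", "conference photos"),
    ("old_friend", "holiday album"),
    ("mum", "garden update"),
])
print([friend for friend, _ in feed])
```

The three very different human intentions above all collapse into the same `+1.0`, which is exactly why a like spent as greeting or gratitude still floods the feed with more of that friend's posts.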

I seem to use my 'likes' as a kind of social currency, giving them away only when I know they fetch me the returns I desire.

Facebook isn't unaware of this emergent pattern of user behaviour. The platform senses that a 'like' signifies something more complex than a mere sign of appreciation. So some years ago Facebook introduced a nifty emoticon palette to help users better show the intent behind a 'like'.

This little widget has to be given credit as a step in the right direction, simultaneously for the platform and its users. But the widget still doesn't come close to the kind of nuanced feedback I am actually giving, as seen in my examples above.

The 'human algorithm' is adapting to the digital one

There is much talk about how algorithms are eating the world. The popular imagination is in awe of their smarts. We tend to think algorithms are getting so smart that they will outsmart us very soon. But unnoticed, another algorithm is adapting rapidly in response to the digital one. I'm talking about the human algorithm.

Ten years ago I didn't know of something called a 'like' button. Now I know fifty ways to use it. That's how powerfully the human algorithm can adapt to digital tools.

The human algorithm develops an incredible degree of sophistication in how it uses tools. And fast! From the human perspective, using Facebook's 'like' feature is less like pushing a button and more like wielding a paintbrush. Hint: it's about expression.

LinkedIn's recent emoji feedback panel is going after the same holy grail. It is more nuanced, better contextualised to a professional network. The question then is: what can LinkedIn do with those signals to make the platform experience better?

Instagram - 'mute' is a life, and face, saver!

One way I curate my feed on Facebook is simply to un-follow someone who is posting too much irrelevant stuff. On Instagram, this used to be a problem: un-following someone meant they were removed from your network. Instagram caught on to this and has now added the 'Mute' option. By muting someone, I can still be connected to them without being bombarded with their posts. And when I think of that person, I can look them up via Instagram Search to catch up on their posts. Great! Now I don't have to lose someone entirely just because I did not want to follow them. Which also means I can start following a lot of accounts I avoided earlier for fear they would crowd my feed.

Here's another example of the human algorithm adapting to digital features.

The GoPro on the right came to market first. Its front-facing screen does not feature a live video preview. The Osmo on the left is the category challenger. It comes with a large front video preview screen. Reviewers who have loved the GoPro comment on how quickly they get used to having a front preview on the Osmo. So much so that after a few hours of using the Osmo they find the GoPro's lack of preview 'weird'. That's how fast the human algorithm assimilates a new product feature into itself.

How would a human do it?

In general, social media platforms still need to correlate a 'like' or 'heart' with a ton of other data points to understand what it really means. Contrast how a human would do it: you know two friends, Person A and Person B, who are in a relationship. Imagine Person A 'hearting' a photo of Person B when they are together. Then imagine Person A 'hearting' another photo of Person B a few months later, except they have broken up by now. You know this. Facebook knows this. But the nuanced way in which you understand that second 'heart' is something Facebook's algorithms cannot match. Not yet, at least. Is this the human ability referred to as 'emotional intelligence'? I like to think of it as an essential element of the human algorithm.
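The point can be sketched in a few lines. This is a deliberately naive illustration (the relationship statuses and labels are invented for this example, not any platform's real data model): the same 'heart' signal reads completely differently once context changes, and in the hard case there is no clean machine label at all.

```python
# Illustrative sketch: the same 'heart' means different things
# depending on relationship context. Statuses and return labels are
# invented for this example; no real platform API is implied.
def interpret_heart(liker, photo_subject, relationship_status):
    if relationship_status == "together":
        return "affection"
    if relationship_status == "broken_up":
        # A human reads nuance here (nostalgia? goodwill? an attempt
        # at reconciliation?) that a flat engagement counter cannot.
        return "ambiguous: needs human-level context"
    return "appreciation"

first = interpret_heart("Person A", "Person B", "together")
second = interpret_heart("Person A", "Person B", "broken_up")
print(first)
print(second)
```

The honest answer for the second heart is the interesting one: the data point is identical, but its meaning is not, and the code can only flag the ambiguity rather than resolve it.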

The way forward = Machine + Human

No wonder there is a lot of talk about complementing machine intelligence with human intelligence. Or rather, it should be the other way around. Netflix's recent move is a clear indication of simpler but no less important attempts at finding this sweet spot. They're experimenting with adding human algorithmic power to augment their service design.

From this link:

"Netflix’s human curation takes a different approach than AI curation, in which the categorization of content is narrower. According to TechCrunch, titles in Netflix Collections are curated by specialists based on “genre, tone, story line and character traits.”

"AI has worked well for the company in content curation before. However, it hasn't been able to refine content suggestions to subcategories such as theme. Machines are able to crunch large data sets to come up with hundreds of suggestions, but the lack of a human touch leads to the inaccurate personalization of content recommendations. This is the gap the human-led categorization aims to fill."

Update from Sep 13th 2019:
We will see more human-machine hybrid combinations in providing enhanced services. Now, Facebook is looking at combining human editorial power on top of its algorithms.
