Trust, Community and Your Customers
We should probably finish learning how to do community well in web 2.0 before we take all our bad habits with us to web3.
In 2009, when we first started talking to consumers using online video diaries, the platforms were set up to function a lot like early Facebook – reverse chronological order, everyone able to look at and comment on each other’s posts. We’d invite a dozen or more people onto a platform, and once they had posted their responses to questions or assignments, they’d get to look at the others’ responses. People would like each other’s posts, ask each other questions, offer each other positive feedback and encouragement. Meanwhile, online focus groups could be a bit chaotic – the software was often buggy and laggy and people wound up talking over each other a lot.
By 2015, all that was over. People told us they didn’t want strangers to see their posts anymore, and if we asked them to share anyway, they limited the photos and video they’d post, defaulting more to text or selfie videos with no context, or uploading images pulled from an online image search. We were conducting many more depth interviews using tools like UberConference and Skype – so we could share screens, talk, and record all at the same time – and avoiding online group discussions because we found those exhausting.
By 2019, people were happy to talk one on one using video conferencing software like Zoom, but by 2021, they preferred to do so with their cameras off. We all get that. Online focus groups were no longer so chaotic, though – people were used to meetings and classes being held this way, so they waited their turn to speak, or used emojis to indicate they had something to say. There were more, often non-verbal, methods for being courteous to other participants, and people had learned a politesse of video conferencing – ways to be polite while disagreeing, and ways to recede from the conversation while staying present in the face of something unpleasant, whether that was a topic or another participant.
As market researchers start to inch back into focus group facilities and in-person interviewing or observation, I wonder how these online-endemic interpersonal skills will translate back into meatspace. And as marketers chase the new hot thing – the metaverse, web3 – I wonder how users will translate interpersonal norms to virtual environments.
We’ve been thinking about the nature of community and trust quite a bit recently, thanks to projects related to online safety and communication in low-trust environments. Here’s what I’m noticing as we do this work, and as we live in the world:
- Communities are hard to manufacture. You can get people to convene in a place, and over time they will create norms, language, and hierarchy that serve both that place and the members’ perceived purpose for being in it. If one group of people understands a community to be primarily social and transactional – about networking, job hunting, or profile-raising – and another group understands that community to be something more intimate and values-based – about social change or cohesion – then clashes will occur. Values-based groups want to enforce their values; transactional groups are not so communal – they’re a cluster of individuals trying to maximize value. So when members hold different visions of the group’s purpose, you get values-members thinking they’re ‘calling in’ members who run against those values, and transaction-members feeling they’ve been ‘called out’ by people they don’t know or trust. We see this in smaller membership groups, on Slack or listservs, especially where moderation is light.
- Tools for enforcing community standards are hard to control. We see this on platforms like Facebook where some folks we’ve talked to recently tell us that reporting features for objectionable content are more often used to simply harass other users – and that bots and trolls use these tools at scale to stymie people genuinely trying to build community around common interests and values.
- People need to be constantly reminded of the values, principles, and rules that bind the community together. Without those reminders, people will assume that the rules are simply ‘common sense’ – and of course, common sense depends a lot on the person wielding it. What is benign to me may be belligerent to someone else; what is offensive to me may be ordinary to you. We’ve noticed that Peloton manages this extremely well – the coaches constantly remind riders of Peloton’s values and how the key metrics work, and they offer supportive, inclusive frames for thinking about leaderboards and achievements rather than competitive or privileged ones.
- Communities thrive when there are carrots, sticks, and discretion. Sure, you need ways to provide positive reinforcement, ways to give critical or negative feedback, and ways to set your preferences and have them respected by the algorithm. But you also need that mirror of trust: discretion. The platform or community lead needs to trust members to exercise discretion in their behavior and in what they share; it also needs to trust them to use discretion in choosing what to engage with. We have to remember that a community, even one convened around shared values, still has a lot of diversity within it (even, perhaps surprisingly, when the community itself is not all that diverse!), so one size simply cannot fit all.
Ultimately, communities require a lot of trust, and that trust matters just as much when you’re trying to understand your customers. As market researchers who use platforms that borrow social media mechanics and resemble community-led spaces, we have to establish a baseline of trust with our participants – our clients’ customers and prospects. How do we do that?
- We have to ask permission – to talk to them at all, on these platforms, in these formats, about these topics.
- We have to understand the context to know when it’s appropriate to try to foster a sense of community among our participants, and when we are trying to establish trust-based relationships between the participant and the researcher.
- We have to hold space for pluralism – we’ll encounter a variety of definitions of ‘common sense’ and values.
- We have to model the behavior we want to see – the candor and the presence and the respect and the generosity. And we need to be thoughtful about what kinds of behavior we want to limit or proscribe.
In other words, we have to finish learning the lessons of web 2.0 before we screw up web3. Or maybe we’re already too late for that.
A few more thoughts on community:
For some truly jealousy-inducing writing and thinking on Peloton, Anne Helen Petersen’s Substack is the bomb dot com.
And here’s what the rest of the team have noticed lately… First from Ashley!
In this video, Paul Mealy (Product Design Leadership @ Meta; author of “Virtual Reality and Augmented Reality for Dummies”) features a prototype for an AR experiment he’s testing: live captioning via body tracking. He shares, “one of the issues the deaf and hard of hearing have had to deal with during the pandemic is masks obscuring facial expressions, which often provide cues to what a person is saying. By utilizing captioning, we can help facilitate communication regardless of if the user’s face is covered.” I appreciate the effort and thought going into the inclusion of the deaf and hard-of-hearing communities, and I know that many others, regardless of hearing loss, can benefit from this type of tool. It is worth noting, however, that many within the deaf and hard-of-hearing communities have a vestibular condition, which means the constant motion of the body tracking can be dizzying. This is just a prototype, of course, so there’s still so much progress to be made. But I can’t help but admire the comments from the LinkedIn community offering feedback, suggestions, and recommendations. Co-creation comes from community, after all.
link: https://www.linkedin.com/feed/update/urn:li:activity:6901902718886129664/
And here’s what caught Nic’s attention:
To keep offenders from simply making a brand-new account as soon as they learn they’re banned from a specific channel, Twitch has deployed a Suspicious User Detection tool. The tool alerts creators and chat mods when a user has been flagged as a Suspicious User, based on criteria unknown to the general public and to signers of Twitch’s ToS. It likely relies on signals such as IP address, browser identifiers, time and date, and similarity of a new username to recently created accounts, among the myriad other indicators available to Twitch.tv developers. The process by which a new account gets flagged as a Suspicious User uses machine learning to automate the task of deciding which accounts may be repeat ToS offenders. Twitch acknowledges that there will be discrepancies, particularly right at the launch of such a system, and that it will rely heavily on creators and chat mods to guide the system toward one that can discern contextually appropriate ‘normal’ behavior from malicious, abnormal behavior. Should this system succeed at regularly weeding out bad actors from those who play nice, a significant burden will be lifted from the shoulders of content creators and their moderators – permitting greater focus on the community they wish to foster.
Link: https://mashable.com/article/twitch-ban-evasion-detection
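Twitch hasn’t published what those signals actually are, so purely to make the idea concrete, here’s a toy sketch of the kind of signal-based scoring the article speculates about. Every field name, weight, and threshold below is invented for illustration – this is not Twitch’s model or API, just a minimal example of combining account signals into a “flag for human review” score.

```python
# Toy illustration of signal-based ban-evasion scoring.
# All signals, weights, and thresholds are invented for this example.

from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Account:
    username: str
    ip_address: str
    browser_fingerprint: str
    account_age_hours: float


def username_similarity(a: str, b: str) -> float:
    """Rough 0-1 similarity between two usernames."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def suspicion_score(newcomer: Account, banned_accounts: list[Account]) -> float:
    """Accumulate weighted evidence that `newcomer` is a banned user returning.

    Weights are arbitrary illustrations, not anyone's production model.
    """
    score = 0.0
    for banned in banned_accounts:
        if newcomer.ip_address == banned.ip_address:
            score += 0.4  # same network: weak but real evidence
        if newcomer.browser_fingerprint == banned.browser_fingerprint:
            score += 0.3  # same device/browser setup
        score += 0.2 * username_similarity(newcomer.username, banned.username)
    if newcomer.account_age_hours < 24:
        score += 0.2  # brand-new accounts are more likely ban evasion
    return min(score, 1.0)


if __name__ == "__main__":
    banned = [Account("troll_guy_99", "203.0.113.7", "fp_abc123", 800.0)]
    newcomer = Account("troll_guy_100", "203.0.113.7", "fp_abc123", 2.0)
    score = suspicion_score(newcomer, banned)
    # A moderator-facing tool would flag high scores for human review,
    # not auto-ban -- which matches the role Twitch gives creators and mods.
    verdict = "flag for mods" if score > 0.6 else "ok"
    print(f"suspicion score: {score:.2f} -> {verdict}")
```

The interesting design choice is the one the article highlights: the score only surfaces an account for human review, and creators and mods – not the model – make the final call.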
Okay, that’s it from us this month.
Be nice to each other.