Security, human harm and the technology of social

In a live chat with Ars Technica, Facebook’s former Chief Security Officer said, “We had ignored that the vast majority of human harm caused online has no interesting technical component. … It’s a technically correct use of the products we build.”

“It’s a technically correct use of the product…” is such a loaded phrase, and it points to a huge problem in the software we build for our communities.

Unfortunately, so much of our community software is being built by people who are either:

  • Naive about the problem
  • Purposely ignoring it

People act badly in person too, but we have thousands of years of socialization keeping most people from straying too far out of line with norms. Put someone behind a phone or computer and those norms fall away. We know this, yet our software is not designed for it.

“…has no interesting technical component…” is, however, BS. Facilitating healthy, safe, and engaging dialogue between humans is one of the greatest, most interesting problems left in human-machine-human interaction. We have to imbue our technology with … heart? Soul?

Recognizing bad behavior and addressing it in real time is a (the?) gritty puzzle to solve.

Community Managers have a unique perspective and skill set, and we should be out there lending our voices and experience — helping the builders of this technology create something that really will assist people in being excellent to one another.

Let’s go.