The proliferation of misinformation on social media – or even just partisan or sensationalistic treatments of politics, science and human relations – could reasonably be considered a threat to democracy itself.

When you add computational propaganda to the mix, where bots are deployed to manipulate public opinion, filter bubbles form even more readily, and you can now find a closed, self-reinforcing community for just about any view you can imagine.

Cryptozoology (Bigfoot, etc.) is an old example, but the same phenomenon plays out today when people share “the truth” with each other on discussion forums, talk shows, podcasts and newspapers that exist to reinforce a particular point of view, rather than merely leaning towards one while remaining open to debate and, potentially, correction.

And if beliefs are not open to correction, they are simply prejudices, and prejudices are not a healthy foundation for informed choice, for democracy (which depends on informed choice), or, therefore, for society.

One of the things I try to convey in both teaching and writing is the value of intellectual humility, whereby we not only embrace and seek out the possibility of being wrong (because, on balance, eliminating false beliefs makes our overall set of beliefs more robust), but also recognise that the most principled and rational response to most questions is “I don’t know”.

But today, it seems that far too many of us have joined the race to be the first with a “hot take”, because everyone is talking about the outrage-du-jour now, and if you don’t get your thoughts registered in time, nobody will ever know just how upset (or ironically detached) you are.

Too many of us forget that you don’t need to pick sides immediately; that you usually don’t know enough to pick sides at all; and that issues are invariably more complex than whatever people are saying five minutes after learning about them.

This is partly why I’ve been less active than usual here on Synapses for the past month. There’s simply too much noise, and it can sometimes feel futile to contribute thoughts that don’t involve being either very upset at something (or someone), or vacuously happy about seeing puppies or some sort of rock formation.

But while pessimism regarding our ability to adopt a more reflective stance on what we encounter on Twitter and elsewhere is certainly justified – especially for South Africans, who are five months from an election and are therefore being fed a diet consisting of 90% misinformation – it’s too soon to give up hope entirely.

First, those of us who care can still help to foster a community of reason in various ways, including embracing nuance and civil disagreement; posting corrections via letters, comments and tweets; supporting fact-checking organisations; and subscribing to quality journalism.

Then, we can learn (and teach) some basic principles of critical thinking, to aid in essential tasks like distinguishing good sources from bad, or compelling evidence from weak.

A few years back, I audited the edX course “The Science of Everyday Thinking”, a free resource on how we reason, and how we can improve our reasoning. It’s still available, still free, and is one of many such resources that anyone (with an Internet connection, etc.) can benefit from.

Another thing I’ve long believed to be vital, but increasingly in short supply, is reminding ourselves how our own conduct can contribute to healthy debate, via some of the lessons taught in what American schools call “civics”.

A healthy relationship with social media is a classic “collective action problem”: we’d all be better off operating in what Wilfrid Sellars calls the “space of reasons”, a context in which we are able, and asked, to justify what we say, rather than just shout opinions at each other.

However, as individuals, we can feel that it’s more rewarding (in terms of the attention economy point made above) to say whatever we like, join the brief flurry of outrage, and then move on to the next outrage as it arrives (usually within hours, if not minutes). But this serves none of our interests in the long term. So part of the solution to the uninformative outrage cycle is to resist the urge to join a mob, and to encourage others to be similarly cautious.

Twitter, Facebook, and whatever other outlet you choose are usually not violating any promises to us when they use algorithms to move certain topics closer to our eyeballs than others. The more we watch, the more ad revenue they earn, but they never promised us an unfiltered feed. They are giving us what we are telling them we like (except, of course, in cases like the Cambridge Analytica or Bell Pottinger manipulations).

We can assess information more critically than we currently do, and that starts with knowing and remembering that the feed is not neutral. As a corollary to that, we should be wary of embracing the paternalism inherent in the notion that we need external protection from our own worst impulses (even though that is of course sometimes necessary).

Asking corporations to fix our gullibility and susceptibility to sensation delegates to them a responsibility we should be cultivating ourselves, not least because the information landscape and the nature of the fire hose will keep changing, so any fixes put in place on, for example, Twitter will not necessarily help on the next platform.

Our own self-interest is served both by being more sceptical, and by helping to foster a community in which epistemic humility and virtue are prized; we should not delegate that job to Mark Zuckerberg or to lawmakers. Nor should we think that people dropping “truth bombs” on Twitter will solve the problem, because of the filter bubbles discussed above.

In summary, if we are looking for design fixes, or legal fixes, to these problems, we are missing the fundamental issue (which is not to say that those fixes can’t help). What we’re missing most is consideration of our individual responsibilities to each other, to debate and reason, and to society as a whole.

Delegating the responsibility for controlling misinformation to corporations, regulators and government is an abdication of our own duties, and will only leave us more vulnerable to misinformation in the future, because instead of encouraging people to think, we’re letting those agents do the thinking for us.

Many of us can’t do long division anymore, or write in cursive, and we might only remember a handful of telephone numbers. That’s fine – I’m quite happy to treat the Internet as an “extended brain”, even though it can leave us quite crippled if it falls over, or our batteries die.

But the basic tasks of forming opinions, having debates, or finding common ground with intellectual adversaries can’t be performed by our smartphones, so let’s take care not to forget how to do them ourselves.