Tuesday, 22 November 2016

The Basics on How to Handle A Crisis On Social Media


Just as in life, many things can go wrong with your business. And with social media keeping us connected every second, your response to a social media crisis can make or break how your followers perceive your brand. But the negative effects of a social media blunder can be greatly reduced with swift action and a solid plan.
In this post I’ll review some basics on how to handle a business crisis on social media.

Is this really a “crisis” or just a problem?

Considering the severity of the problem is the first step in social media crisis management. If the problem is small, like a minor customer complaint, you may not need to push it too far up the ladder. But if it's a bigger issue, like a highly offensive post, or one caused by something outside of social media, like a major delay in shipments, a product recall or a catastrophe, you'll need to refer to your crisis plan.

Keep Cool & Have a Plan

The fastest way to handle a crisis is to have a plan in place already. Keep cool and assess the situation in full to ascertain the scope of the issue - the last thing you want is to draw attention to something that could have been handled quietly. Gather all the details: how people are actually mentioning the issue, where the problem originated, where it was first talked about or posted, and whether there's any immediate solution.
Your plan should always include:
  • Who gets informed about the crisis
  • Who will be in charge of handling responses
  • Internal communication plan
  • What happens if there is a delay in spotting the issue or if decision makers are away?
  • Content – how will your message be delivered?
  • If possible, pre-set messages for smaller issues and a “blueprint” for bigger issue responses.
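The checklist above can be captured as a simple structured template your team fills in ahead of time. Here's a minimal sketch in Python - every field name and value below is illustrative, not part of any real tool:

```python
# Illustrative crisis-plan template: each field mirrors one checklist item.
crisis_plan = {
    "notify": ["social media manager", "PR lead", "legal"],  # who gets informed
    "response_owner": "social media manager",                # who handles responses
    "internal_channel": "email + group chat",                # internal communication plan
    "backup_owner": "PR lead",                               # cover if decision makers are away
    "channels": ["Twitter", "Facebook"],                     # how the message is delivered
    "preset_responses": {
        "minor": "We're sorry to hear that - please DM us so we can help.",
        "major": None,  # blueprint only: drafted per incident by the response owner
    },
}

def get_response(plan, severity):
    """Return a pre-set message for minor issues, or None when a
    bespoke response must be drafted from the major-issue blueprint."""
    return plan["preset_responses"]["minor" if severity == "minor" else "major"]
```

The point of the `None` entry is the distinction made above: small issues get a ready-made reply, while bigger issues only get a blueprint that a named owner turns into a tailored response.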
Once the right people are informed and you’ve analyzed all angles, you can formulate a well thought out response to disperse on your social media channels if needed.
Remember that not all situations require public acknowledgement, but all situations will require this next point.

Respond Quickly

Respond to the issue as soon as possible. The faster you acknowledge the problem, take responsibility and apologize for any wrongdoing, the better.
You may need help with this, but run searches for anyone mentioning your brand and try to respond to every concern individually if possible.
Remember to respond to negative comments in a positive tone and show you care about what your customers are thinking and feeling - this reinforces brand loyalty.

Pause Scheduled Posts

Until things get figured out, pause any planned outgoing messages from your brand or business - you don't want to seem like you're ignoring the problem and carrying on like nothing is happening.
Once the situation has blown over or been handled, feel free to re-establish regular communications.

Take It Offline

While you’re responding to those who have contacted your business about the issue on social media, try to get questions and conversations taken offline. This means giving a place to contact you besides your social networks.
In your responses, provide an email address, phone number, or landing page where anyone with concerns can reach you.

Keep Everyone Informed

Be sure to update the public on the situation.
Keeping silent will only fuel the fire, and a simple message can go a long way. You can communicate in the form of social updates, a landing page dedicated to the situation etc.

And two things NOT to do:

  • Don't delete/hide negative comments - This will only anger the commenter even more - and people can really fly off the handle, especially on the internet. You should also avoid blocking anyone unless they really become a nuisance.
  • Don’t argue with people - Remember that this is your business, not your personal account. Avoid getting emotional and going back and forth with trolls.
Handling a crisis can be overwhelming, but remember that how you respond and overcome the crisis is what matters. Use social media as a tool to communicate and get some good out of a bad situation.
Just like we run fire drills, run through a crisis drill – imagine yourself in the shoes of BlackBerry when The Verge pointed out they'd tweeted from an iPhone, or when a hashtag campaign goes wrong (remember how the #MyNYPD campaign turned into a slew of photos of police brutality?).
Now that you know the basics, you can create simple plans for different situations so you're always prepared to handle a business crisis on social media.


Author: Dhariana Lozano
Source

Monday, 21 November 2016

Twitter Trials Tools to Turn Trolls Tongue-Tied





Twitter is a valuable online haven for discussing current affairs without fears of censorship. Unfortunately, free speech comes at a price, and that price is the troll toll. No, not that one, Always Sunny fans. The one that means that allowing people to say what they want from the anonymity of their mum's basement will inevitably lead to a spewing forth of vile abuse towards anyone with the temerity to state a slightly unpopular opinion, go against the meta, or just be a woman online.

Thankfully, Twitter continues to implement and enhance features to deal with the troll brigade, and some more were announced today. The features include an upgraded mute function, easier reporting of hate speech, and revamps to the company's internal support teams.

Twitter has introduced several anti-troll measures in the past, such as the flagging of abusive tweets, creating a safety council, and adding a quality filter, but the new additions will help further refine the way the platform deals with the insidious troll threat.

The mute function was first released by Twitter last month, allowing users to mute accounts they don't want to hear from. This update takes that even further, letting users mute certain keywords, phrases, and even entire conversations from appearing in their notifications. This means inflammatory or simply undesired content, especially content targeted specifically at the user, can be filtered out without the user ever being made aware of it. However, muted phrases can still be seen on a user's timeline, so it's not yet a watertight feature.
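Conceptually, keyword muting is just a filter sitting between incoming notifications and the user. A minimal sketch of the idea in Python - this is an illustration only, not Twitter's actual implementation, and case-insensitive substring matching is an assumption on our part:

```python
def filter_notifications(notifications, muted_phrases):
    """Drop any notification whose text contains a muted phrase.

    Uses a case-insensitive substring match as an illustrative rule;
    a real system would likely match on tokens and handle word boundaries.
    """
    muted = [phrase.lower() for phrase in muted_phrases]
    return [
        text for text in notifications
        if not any(phrase in text.lower() for phrase in muted)
    ]

# e.g. filter_notifications(["great post!", "such a TROLL"], ["troll"])
# keeps only "great post!"
```

Note that this only filters the notification stream - exactly the limitation described above, where muted phrases can still surface on the timeline itself.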




The next part of the announcement concerns the way in which users can report trolls. Twitter's hateful conduct policy forbids 'specific conduct that targets people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.' The new reporting feature will allow anyone to notify Twitter of offensive speech whenever they see it, meaning the burden is not all on the victim to report abuse.

It's no good all these reports coming through if no one is there to deal with them. That's why Twitter has also promised a refresh of all the support teams that deal with the issue of hate speech. This is coupled with improvements to the internal tools and systems they use to address them. It is of course difficult to fully automate reporting, as that opens up the system to abuse, allowing the very trolls it's designed to target to use mass reporting as a brigading tool.

So just how big is the problem? Well, Mashable wrote an article questioning whether we've reached peak troll, and abuse on the platform has certainly been building to a fever pitch, especially during the recent US election. If users don't feel secure on Twitter it could lead to dropping user numbers, with monthly user growth already a problem for the platform. There are even rumours that the hate speech issue was part of what led to Disney's withdrawal from a takeover bid.

It's a difficult dilemma for Twitter to deal with, as there is a fine line to walk between free speech and censorship. Customising user muting preferences seems like a good way to deal with that, allowing potential problems to be nipped in the bud. And that is important, as free speech also relates to people's right to have their voice heard without an oppressive barrage of abuse drowning it out. Twitter recognises the scale of the issue, and the task of dealing with it, stating: 'We don’t expect these announcements to suddenly remove abusive conduct from Twitter. No single action by us would do that.' Indeed, trolls can be resourceful creatures, even starting to speak in code to get their grim message out.

Still, it's admirable that they're trying, both for their own sake as a platform and for the sake of their users. Hopefully the trolls will be gradually driven back under their bridge, and we'll no longer have to pay the price for their shenanigans. Or the troll toll.


Author & Source

National Geographic Turn Their Best Instagram Snaps into an Exhibit in Washington DC


National Geographic has been a mainstay in photography for as long as it has existed, so it makes sense that they have one of the best Instagram accounts around - gotta keep up with the times. From the sublime to the humorous to the breathtaking, their feed is an utter joy to scroll through, and now you can essentially stand in the middle of it.

Well, you can do that if you happen to be in Washington DC any time soon. From now until April, the National Geographic Museum (yes, it's a thing) is exhibiting prints of more than 200 of their most popular posts out of the 12,300 or so that they've uploaded since the account went live in 2012.

Nat Geo being who they are, the sheer variety alone is enough to generate interest - they have photographers covering literally the whole planet, and the kinds of moments they've been able to capture paint a tapestry so vibrant it's almost seizure-inducing.

The first thing you see upon walking into the museum is a giant iPhone displaying a blown-up version of the account itself, and from there you're invited to wander through a literal maze of plus-size prints, meant to represent the narrow avenues that social media funnels you through. Using social media to your advantage and taking a swing at it at the same time - well played, Nat Geo.

The maze features a few alcoves which allow visitors to examine the photos in more detail and listen to some recorded backstory from the photographer. They can then record their own thoughts and feelings on the image, and the best examples get edited onto the end of the photographer's recording. That's right - if you're impressed enough, you can actually become part of the exhibit, and not in a creepy way like that photograph at the end of The Shining.

If this all sounds exciting in one sense, but frustrating in another, far more literally geographic sense, fret not, there's a book you can buy as well. It won't record your thoughts, but at least you'll get a similar sense of how amazing some of these images are beyond just flicking between them on a touch screen.

Author & Source

Facebook 'Fake News': A Scapegoat?



Discretion advised: satirical themes.

One thing you may have spotted in the news recently is the announcement that 'post-truth' is now Oxford Dictionaries' word of the year (hyphenated though it is). Something else you may have noticed are Facebook's latest headline-grabs. Both, it seems, are making the news thanks to their shared roots in the US Presidential election. But let's consider the Facebook side of things.

A week ago, we were noting how its users were talking about the outcome of the vote, and examining what political posts can actually achieve. Things then escalated as people began considering the social media giant a major contributor to the supposedly baffling Trump phenomenon, initially for its apparent facilitation of the so-called 'echo chamber' effect. Now, however, the grand thermostat has moved up another notch (reaching the third degree, as it were), with allegations emerging from various media outlets that Facebook's persistent publication of what's being called 'fake news' was another contributory factor in the surprise outcome of the US election...reaching, perhaps, the stage at which we can start believing that Mark Zuckerberg himself was principally responsible for it. On Tuesday, the Guardian called for all 'facts' to be labelled as such, whilst others have been advocating the use of various Google Chrome extensions like 'B.S. Detector', which alert users when headlines they encounter come from 'questionable sources.' Responding to the growing turbulence, Facebook has announced it will ban 'fake news' sites from its advertising network.

But how far is Facebook embroiled in a 'fake news crisis'? Indeed, given the implications of molding an outlook based upon binary 'fake'/'real' distinctions, is this really what we should be calling it? And finally, are we really stepping through the looking glass, or just casting all our troubles upon a sacrificial lamb? It seems we're straying too close to the latter case. Indeed, what is more, there are at least three ways in which news is generally considered 'fake': one of which, unfortunately, does not necessarily deserve such a label.  

The first type of 'fake news' is that which is meant to be a joke. On Tuesday, the BBC's Newsbeat service interviewed somebody called Chief Reporter, captain of the fake news site Southend News Network. Chief Reporter had that day issued a 'frank apology,' claiming 'full responsibility for Donald Trump's victory in the recent US Presidential election.' Publications like SNN, Daily Mash, NewsThump and The Onion are not only hilarious but are also clearly bogus. That's to say, their creators specifically intend them not to be taken seriously. They're the fun kind of 'fake news' which can't really be misconstrued as anything else.

The second type, however, is that which is designed to have readers believe fabricated facts. Here's where it gets tricky. The Guardian claims that 'more than 100 pro-Trump phoney sites were being run from a single Balkan town' during the election campaigns. Buzzfeed recently reported that 'hyperpartisan' Facebook pages - those allied deeply to either the Democrats or Republicans - were each 'publishing false and misleading misinformation at an alarming rate' (specifically, 20% of the far-Left's posts were phoney compared to 38% of the far-Right's). What's more, throughout the primaries and Presidential debates, the BBC's Reality Check service ran fact-checking on each candidate's statements, and found them both to be espousing baseless claims at certain points. These are the main sources of 'fake news' which seem to be attracting everybody's ire. They are intended to be taken seriously, even though the facts upon which they are based are fabricated (although their creators may not realise it).

However, the criticism of these sources, whilst justified, is leading some to conclusions which are not only wobbly but also somewhat unsettling.

The problem is that there seems to be a growing inclination to consider a third type of news 'fake' - at least by implication, by virtue of its not being considered 'real'. This type of news is that which omits facts, arguments and opinions which do not support the author's point-of-view. An easy example would be the kind of 'hyperpartisan' news which Buzzfeed was reporting on above; the majority of which isn't based on fabricated facts but rather upon what one might call, if they were being critical, factual 'cherry-picking'. Here's where it gets really tricky, because commentators often confuse definitions whilst driving an unnecessary wedge between 'reality' and 'journalistic integrity' in relation to these types of media. So, let's unpack those ideas.

Facebook are currently under fire for two things: their censorship of pro-Trump trending topics during the election campaigns, and their refusal to intervene to block 'fake news' from appearing on people's newsfeeds. They're therefore being told to fess-up to two charges: first, that they are indeed a 'news outlet' (like Twitter...although Twitter, for the record, mainly re-defined itself as a news site to satisfy shareholders, not necessarily because they actually are one), and second, that they are therefore lacking the stuff which all such outlets need: journalistic integrity. Their shortage of the latter is being called-out because its net effect was to give Trump the edge.

But let's take a step back. When trying to define terms like 'journalistic integrity', maybe we should bear in mind what big-thinker Immanuel Kant said about ethics: a moral deed is one that is rooted in a 'good will.' From that stance, if a publisher doesn't mean to deceive - if they act upon their ethical principles, which probably shun cover-ups - they could even be forgiven for publishing propaganda. They could certainly be said to have 'journalistic integrity.'

The big question, then: is that the same as 'real news'? Many would say no; 'real news', surely, is that which is true. But...if our society really has been won-over by the likes of Roland Barthes and his fellow postmodernists (which, it seems, a lot of people have), maybe we should actually be responding with a yes. After all, few nowadays would agree that any news can be more than a warped representation of objective truth (if the latter exists). The best things which can remain are merely good intentions. And that seems to be what constitutes 'the real' nowadays, at least when it comes to distinguishing 'real news' from 'fake'. It's why it's more appropriate to define 'real news' as being 'honest news' - that which has 'journalistic integrity'.

Be warned, therefore, all ye who add certain Google Chrome extensions or think Facebook should be rolling-out labels to help identify 'real news' - we might find less a sacrificial lamb and more a wolf in woolly clothes.

As we've seen, the hard question is defining 'fake', whilst the harder question is defining 'real'. It's less that it's difficult to pin-down and delete 'fake news', and more that anybody trying to create or name the 'real news' has their own motivations and outlooks (call them 'agendas' if you don't like what they are). If the arbiter is honest, they will inevitably stumble into fundamental contradictions; and if they're a large, essentially faceless corporation like Facebook, which is not just a media organisation but also many other things (principally a business floated on the stock market, run by a board of directors and operated for profit rather than public enrichment), we'll soon find a serious conflict of interests emerging if we want to see the high ideals of journalistic integrity embodied by such a platform. Imagine the power they could have over our thoughts were they to wield their capacity for censorship in earnest. It's already starting - from the above Chrome extensions to the exclusion of certain news sites from Facebook's advertising network.

What, then, can we do? As always, it's a difficult question. Facebook is all but destined to keep labeling 'real' and 'fake' if they want to keep their site usable. Human editors, algorithms, or any other kind of filter between users and the vast web of hundreds of people in our social spheres (as well as the many thousands who are one-friend-removed, not to mention the rest of the internet) are not only necessary, but will also continue to be necessarily burdened by the same problem: the omissions they have to make entail that people chasing 'real news' are always going to be unsatisfied. The latest notch on the critical thermostat wages war against fabricated facts. But giving any organisation, even by implication, the chance to slap the labels of 'real' and 'fake' upon our incoming data as a means to combat this will inevitably lead to that agent abusing such extensive powers - or at very least applying them with 'integrity' (that is, in an unbalanced way).

We can't leave the thinking up to other people. That only works to a certain extent; and it becomes dangerous if we surrender our scepticism entirely. We do need to see Facebook as a filter for the news; and it would be useful for them to label facts which are verified. But if third parties are then going further, starting to tell us what's broadly 'fake' and 'real', we should remember to disagree with them whenever we can. What's more (and it is disheartening to say this) if all we want through these most recent accusations toward Facebook is a coherent explanation for the upending of the norm embodied by the new President-elect, then fine - but we can't pin the blame on Facebook alone; just as we shouldn't allow any news outlet, real or fake by whoever's estimations, to claim purveyance of 'the' news.

Author: Left Click Right Click Blog
Source

The Most Dangerous Messaging App?


It’s been at the forefront of the messaging app craze for almost six years, boasts upwards of 300 million regular users, and its core market is teens and younger Millennials.
What is it?
It’s the wildly controversial messaging app, Kik.

What is Kik?

Kik is a Canadian-based messaging app that's totally free. Users can send and receive an unlimited number of messages (which can include text, photos, and videos) to anyone with a Kik account - the app works over users’ data plans or a WiFi connection.
To create a Kik account, all you need is an email address and username. To find people, you can search usernames, scan a Kik code, or search a phone number.
Free and fairly private, especially from the eyes of parents, the app’s appeal to teens is obvious. In fact, 40% of American teenagers use Kik.

What’s the Danger?

What makes this app, and many like it, so controversial is that it enables users to remain totally anonymous. Users can communicate without revealing their actual names or phone numbers, and Kik doesn't track the content of messages or the phone numbers of users. This makes it hard for law enforcement and parents to get almost any information about the person on the other end of the message.
Over time, there have even been multiple reports of the app being used to commit heinous crimes, with law enforcement officials even going so far as to encourage parents to delete Kik altogether off of their children’s phones. The appeal of such anonymity is understandable, and a major reason why the app has garnered so many users, but what can be done about the dangers of an app like Kik?
In short, not much. Kik’s CEO Ted Livingston is confident that messaging is the future, and that “chat is the next once-a-decade platform.” If it’s around to stay - and it appears that it is - you can only wonder whether the stigma associated with these apps will affect their longevity.

The Future of Messaging Apps

Despite the apparent risks of messaging apps like Kik, many people in the industry would agree that these apps will play a big role in the future of social media. Exclusive messaging apps open up new opportunities not available on other platforms. Kik’s CEO even speculates that it might be the “next great operating system”, with apps as browsers and bots as websites.
For one, the marketing potential is huge. Marketers can use messaging apps to contact potential and current customers via chatbots - Facebook Messenger is already beginning to see success with this ability. Another future use for these messaging apps could be what Kik CEO Livingston calls the “chat platform”, where users can use messaging apps to do more everyday things like hailing an Uber or ordering flowers.

Use at your Own Risk

So is Kik one of the most “dangerous” apps when used in the hands of teens or criminals? Probably. But there are others like it and once a technology or product has been created, it’s impossible to take it away completely, especially since 42% of all teens are talking to their friends on private messaging apps already. If Kik were banned, other messaging apps would fill the void.
For parents: be knowledgeable about your child’s smartphone usage and the apps they download. As for the problem of tracking criminals, law enforcement may be able to bolster their efforts by using the technology to their advantage too.

Author: Alyssa Sellors
Source