
Running Social Media Campaigns with PowerShell

The Social Media Scripting Framework has been published for several weeks now, and the feedback that I’ve been collecting so far has been quite positive. In fact, I’ve learned a lot from the conversations that I’ve had with some of you, and I am pretty confident that we are going to see interesting evolutions in future releases thanks to your contributions. Therefore, I would like to start by thanking you all for your help and support.

However, it is time to start explaining some concepts in more detail and showing more complex examples. So, let’s start with the challenge: let’s run a Social Media campaign from PowerShell!

Defining the scope …

For the purpose of this exercise, we are going to define our “project” as a broadcasting campaign run by someone who needs to know the exact impact of the activities that have been carried out:

  1. under the scope of that campaign,
  2. in the Social Media channels – namely, Facebook, Twitter, LinkedIn, etc. – that participate in it.

In order to achieve this, we have to capture the metrics associated with each post that gets pushed to these channels as a result of every action of our broadcasting campaign. In other words, if there is additional activity on those channels, we need to be able to ignore it and identify only those posts that strictly belong to our campaign. To do so, we will rely on the Permalink that each post will, hopefully, have on each of these channels.
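To make this idea concrete, here is a minimal, purely illustrative sketch of permalink-based filtering. The $timeline and $campaignPermalinks variables are hypothetical stand-ins (a collection of posts pulled from one channel, and the permalinks recorded for the campaign); the Post_PermaLink_URL property matches the campaign schema that you will see later in this post:

# Illustrative sketch: keep only the posts that belong to our campaign
# ($timeline and $campaignPermalinks are hypothetical stand-ins)
$campaignPosts = $timeline | Where-Object {
    $campaignPermalinks -contains $_.Post_PermaLink_URL
}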

What we can get and what we can’t …

There are many things that we want to know. Once we start looking at Social Media channels, we will soon realize that there is data that we can take directly from them. However, we will also see that, in order to turn that data into meaningful information, we need to bind it to, or meld it with, other attributes that uniquely belong to our particular context.

Ok, this is getting too abstract, so let’s consider one simple example. Imagine that we are only broadcasting messages to Twitter:

  • Our campaign might contain messages addressing different “themes”. Maybe we need to aggregate information by that criterion. To do so, we should be able to “label” each tweet with the right theme (see the sketch after this list).
  • Each “Theme” might contain different “stories”, and these, in turn, might be composed of different “messages” or tweets …
  • Similarly, our campaign might be sharing different types of deliverables: blog posts, images, e-books, whitepapers, polls, videos, webinars, etc. What if we want to analyze our campaign by “Deliverable Type”? … Exactly, we should be able to “label” each tweet with the right “Deliverable Type”.
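In PowerShell terms, a “labeled” post is simply an object that carries both the message and your contextual attributes. Here is an illustrative sketch with made-up values; the property names mirror the campaign schema shown later in this post:

# A "labeled" campaign post: the message plus our own contextual attributes
$post = [PSCustomObject] @{
    Story_Text         = 'Our new e-book on Social Media analytics is out!'
    Post_PermaLink_URL = 'https://twitter.com/SomeAccount/status/123456789'
    Theme              = 'Analytics'
    Deliverable_Type   = 'e-book'
}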

I guess you can see a pattern here: you need to manually “label” each post with all the additional contextual attributes that matter to you. The key word here, however, is “manually”. Why? Because, even though you can try to infer some of these attributes or categories automatically, most of the time this is not possible. And, even when it is, you might discover that the way to do it is neither reliable nor consistent, so you have to be ready to deal with the errors by hand.

Manual work sucks. Fortunately, we can mitigate this significantly by using the right tools. Microsoft Excel or Microsoft Access, to name just two, shine at this. They are both widely available and will help you with features like drag & drop, copy & paste, grouping, filtering, auto-typing, data validation rules, pivot tables, data templating, and many others.

I know, this is not an ideal situation, but now consider these other scenarios:

  1. Ignoring those context attributes altogether. Ok, I agree, this doesn’t seem to be a viable alternative …
  2. Applying these labels by hand through the web interface of your Social Media tool. My experience with this approach has shown me that it is neither a productive nor a scalable solution. Hopefully, this will change in the future but, unfortunately, we are not even close.
  3. Planning all your campaign elements in detail in advance, with all those attributes defined, so that you can import them into your cloud-based Social Media tool … Oh, wait! To do that you would use Excel! So, why bother doing the job twice? ;-) Additionally, we can’t assume that we can plan for every situation. Planning is great, and mature, highly structured organizations will tend to do it very well. However, we may very well be involved with teams, or face situations, that force us to adapt our ways. One good example of this was Oreo’s reaction to the blackout during the last Super Bowl.

Anyway, I am not saying that Excel is the right tool for the job, but it is a useful example to illustrate that Excel-like editing facilities are needed to deal with this kind of task.

The secret sauce …

Ok, with all these considerations in mind, we are ready to begin running our Social Media campaign in PowerShell with the help of the Social Media Scripting Framework.

However, before we start, I would like to share one little secret with you: no matter what you do, you will find yourself following the cycle below. So, rather than memorizing “commands”, I would recommend internalizing this chart. It is way easier, believe me:

[Figure: SMSF Information and Process Cycle]

Basically, you can follow one of these strategies:

  1. Go to each Social Network, grab each post with its metrics and persist the results in Excel.
  2. Insert each permalink in Excel, by hand or using the previous strategy, and let PowerShell create a “virtual Time Line” that will only contain those posts whose permalinks you had in your sheet. Once you have that, you can collect the metrics and persist the results back to your workbook.

These two strategies are not mutually exclusive, so you can switch back and forth between them at your convenience. The only thing that you must know in advance is how each Social Network deals with its Permalinks. For example, Facebook has more than one valid representation for a post. That peculiarity makes the first strategy the advisable starting point there.
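To give you a feel for the Facebook case, consider the following illustrative sketch. The exact URL forms vary over time and the IDs below are made up, but the same post can typically be reached through more than one address, so a naive string comparison between permalinks is not enough:

# Two addresses that may point to the very same Facebook post (made-up IDs)
$formA = 'https://www.facebook.com/SomePage/posts/123456789'
$formB = 'https://www.facebook.com/permalink.php?story_fbid=123456789&id=987654321'

$formA -eq $formB   # False: a naive comparison misses the match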

Running the campaign …

Let’s start by opening our Excel campaign workbook:

$excel              = New-ExcelInstance
$book               = Open-ExcelFile         -instance $excel -file .\SMCampaignControl-demo-201303-0.xlsx

$CampaignSheetId    = Get-ExcelSheetIdByName -book $book -name 'campaign'
$CampaignDataSource = Get-ExcelSheet         -book $book -id $CampaignSheetId

$CampaignInfoSchema = Get-ExcelHeaders       -sheet $CampaignDataSource

$campaign           = Load-ExcelDataSet      -sheet $CampaignDataSource -schema $CampaignInfoSchema  # ETC: 15 minutes.

Close-ExcelFile -instance $excel -book $book

There are three interesting things to notice here. The first one is the somewhat unavoidable ceremony involved in dealing with Excel. You can blame me for that, because I wanted to provide flexibility when handling different Excel layouts.

The good news is that, thanks to real-life experience and your feedback, I’ve realized that making this process simpler and more straightforward is well worth the effort. So, hopefully, you should expect significant improvements in this particular area in future releases.

The second one is how slow Excel operations can be. In this case, it took around 15 minutes to load 141 posts (rows) from my workbook.

This clearly means that future releases will have to implement different strategies so that these activities become far more “tolerable”… As you can see, my to-do list is growing fast! :-D.

In fact, if you think that, for some reason, you will have to close your PowerShell window before you have finished your work, I would recommend saving your dataset to disk. That way, it will be quicker and easier to resume the process later on:

$campaign | Save-DataSet -source excel -label demo
$campaign = Load-DataSet .\DataSet-excel-demo-2013326-1323-4-552.xml

Finally, on the last line you can see that I close the Excel file once I have loaded the information from it. This practice will give you a more reliable experience with your Excel operations.

The reasons for this are quite technical and out of the scope of this post. However, if you are interested in digging deeper into this particular topic, just let me know.
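For the curious, the short version is that Excel automation goes through COM, and COM objects are not released just because a PowerShell variable goes out of scope. The following sketch shows the kind of defensive cleanup involved when driving Excel through raw COM; it illustrates the underlying issue and is not what the framework does internally:

# Defensive cleanup for a raw Excel COM instance (illustrative only)
$excelCom = New-Object -ComObject Excel.Application
# ... work with $excelCom ...
$excelCom.Quit()
[void] [System.Runtime.InteropServices.Marshal]::ReleaseComObject($excelCom)
[GC]::Collect()                     # prompt .NET to release the COM wrapper
[GC]::WaitForPendingFinalizers()    # otherwise EXCEL.EXE may linger and lock the file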

Ok, now that we have our dataset loaded, let’s have a quick look at it for a moment. Just remember that you don’t have to do this in order to run your campaign: we are doing it just for educational purposes.

$campaign.count   # 141 posts

$campaign | Get-Member

   TypeName: Deserialized.System.Management.Automation.PSCustomObject

Name                      MemberType   Definition
----                      ----------   ----------
Equals                    Method       bool Equals(System.Object obj)
GetHashCode               Method       int GetHashCode()
GetType                   Method       type GetType()
ToString                  Method       string ToString()
Approved                  NoteProperty System.String Approved=
Audience                  NoteProperty System.String Audience=
Author                    NoteProperty System.String Author=
Campaign_Content          NoteProperty System.String Campaign_Content=
Campaign_Medium           NoteProperty System.String Campaign_Medium=
Campaign_Name             NoteProperty System.String Campaign_Name=
Campaign_Source           NoteProperty System.String Campaign_Source=
Campaign_Term             NoteProperty System.String Campaign_Term=
Campaign_URL              NoteProperty System.String Campaign_URL=
Channel                   NoteProperty System.String Channel=
Clicks                    NoteProperty System.String Clicks=
Content_Source_Team       NoteProperty System.String Content_Source_Team=
Conversations             NoteProperty System.String Conversations=
Deadline                  NoteProperty System.String Deadline=
Deliverable_Title         NoteProperty System.String Deliverable_Title=
Deliverable_Tracking_Code NoteProperty System.String Deliverable_Tracking_Code=
Deliverable_Type          NoteProperty System.String Deliverable_Type=
Description               NoteProperty System.String Description=
Done                      NoteProperty System.String Done=
Downloads                 NoteProperty System.String Downloads=
Keywords                  NoteProperty System.String Keywords=
LastUpdateDate            NoteProperty System.String LastUpdateDate=
Likes                     NoteProperty System.String Likes=
ObjectId                  NoteProperty System.Int32 ObjectId=3
Post_PermaLink_URL        NoteProperty System.String Post_PermaLink_URL=
Publisher                 NoteProperty System.String Publisher=
Publishing_Date           NoteProperty System.String Publishing_Date=
Scope                     NoteProperty System.String Scope=
Short_URL                 NoteProperty System.String Short_URL=
Short_URL_Stats           NoteProperty System.String Short_URL_Stats=N/D
SME                       NoteProperty System.String SME=
Story_Text                NoteProperty System.String Story_Text=
Subchannel                NoteProperty System.String Subchannel=
Tags                      NoteProperty System.String Tags=
Target_Audience           NoteProperty System.String Target_Audience=
Target_URL                NoteProperty System.String Target_URL=
Theme                     NoteProperty System.String Theme=
Time_Scope                NoteProperty System.String Time_Scope=

$campaign | group channel | sort count -descending | format-table Count, Name -autoSize

Count Name
----- ----
   43
   28 LinkedIn
   28 twitter
   23 facebook
   10 Flickr
    8 Atos Blog
    1 SlideShare

As you can see, my Social Media campaign contains information about posts pushed to different channels. So, in this case, instead of downloading a complete timeline from each Social Network, I will create a “Virtual Timeline” out of the Permalinks that each element of my campaign has.

Here is how it gets done:

$TwitterTL  = Rebuild-TwitterTimeLine -from $campaign # 7 minutes
$FacebookTL = Rebuild-FBTimeLine      -from $campaign # 2 minutes
$LinkedInTL = Rebuild-LINTimeLine     -from $campaign # 5 minutes

$TwitterTL.count  # 25 posts
$FacebookTL.count # 13 posts
$LinkedInTL.count # 16 posts

Now that we have the timelines, we are ready to get the full set of metrics associated with them. To do so, we will “Normalize” each timeline. The normalization process involves two main activities:

  • Adjusting the original data structures and formats that each Social Network returns so that they are easier to deal with later on.
  • Including all the additional information that, for whatever reason, didn’t come with the original request (a small illustration follows this list).
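As a purely illustrative example of what normalization means, here is a sketch that reshapes a hypothetical raw tweet ($tweet, as returned by the Twitter API) into the campaign’s schema. The framework’s actual field mapping may well differ:

# Illustrative only: reshape a hypothetical raw tweet into the campaign schema
$normalized = [PSCustomObject] @{
    Post_PermaLink_URL = "https://twitter.com/$($tweet.user.screen_name)/status/$($tweet.id_str)"
    Likes              = $tweet.favorite_count   # raw API name -> campaign column
    Conversations      = $tweet.retweet_count
}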

At this point, knowing what happens behind the scenes helps a little bit. For example, on Twitter, getting information about the retweets of a post involves calling a very slow API. Therefore, it is a very good idea to download this information only for those tweets that we know have actually been retweeted.

$TwitterTL | where { $_.retweet_count -gt 0 } | Measure # 24 posts

$NormalizedTwitterTL  = Normalize-TwitterTimeLine $TwitterTL  -IncludeAll  # ETC: 53 minutes
$NormalizedFacebookTL = Normalize-FBTimeLine      $FacebookTL -IncludeAll  # ETC: 1 minute

You may have noticed that the LinkedIn Timeline doesn’t need to be normalized. This is due to the fact that the current implementation of the framework takes the information via web scraping and not through the API.

I am already working to bring more consistency to this particular aspect of the user experience. I am also exploring further abstractions and strategies to address this step of the process. Hopefully, it will improve significantly in future releases ;-).

We are almost there! Now that we have the metrics, we can update our campaign accordingly:

$UpdatedCampaign  = Update-DataSet $campaign -with $NormalizedTwitterTL  -using $TwitterRules
$UpdatedCampaign += Update-DataSet $campaign -with $NormalizedFacebookTL -using $FacebookRules
$UpdatedCampaign += Update-DataSet $campaign -with $LinkedInTL           -using $LinkedInRules

$UpdatedCampaign  = $UpdatedCampaign -ne $null # drop $null entries (workaround for a known issue)

$UpdatedCampaign.count # 54

At this point you might be wondering what those “rules” are and where they come from. Well, they are defined in the SMSF-settings.ps1 file, and each one contains the mapping rules necessary to automatically match each Social Network’s information with your Excel dataset.

Going deep into the mapping rules is out of the scope of this blog post. However, they are pretty powerful and flexible. If you are interested in knowing how they work, I invite you to check out the in-line “documentation” included in the same SMSF-settings.ps1 file.
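Just to give you a feel for the concept, here is a hypothetical sketch of what a mapping rule could look like: a key to pair each post with its Excel row, plus a set of property-to-column mappings. The real syntax lives in SMSF-settings.ps1 and may differ from this illustration:

# Hypothetical illustration only; check SMSF-settings.ps1 for the real syntax
$TwitterRulesSketch = @{
    MatchOn  = 'Post_PermaLink_URL'          # key used to pair tweets with Excel rows
    Mappings = @{
        favorite_count = 'Likes'             # source property -> Excel column
        retweet_count  = 'Conversations'
    }
}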

Now we have all the metrics and Social Media information perfectly mapped to the information that came from Excel. It has taken seconds to do something that usually takes hours of human-intensive work … It looks nice, doesn’t it?

Anyhow, let’s have a look at the differences between the original campaign and the updated one. You can skip this step, of course, but it is very useful to know how to spot the differences, isn’t it?

$campaign | Format-Table ObjectId, Channel, Likes, Conversations, Audience, Clicks, Post_PermaLink_URL -auto
$UpdatedCampaign | Format-Table ObjectId, Likes, Conversations, Audience, Clicks, Post_PermaLink_URL -auto
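If you prefer a programmatic diff over eyeballing two tables, PowerShell’s built-in Compare-Object cmdlet can isolate the records whose metrics have changed. A minimal sketch, assuming both datasets share the ObjectId and metric columns shown above:

# Show only the records whose metric values differ between the two datasets
Compare-Object -ReferenceObject $campaign -DifferenceObject $UpdatedCampaign `
               -Property ObjectId, Likes, Conversations, Audience, Clicks |
    Sort-Object ObjectId |
    Format-Table ObjectId, Likes, Conversations, Audience, Clicks, SideIndicator -AutoSize

Each differing record shows up twice, tagged <= (original values) and => (updated values).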

Finally, we need to persist the data back to Excel to close the loop. We will do it by following almost the same path that we did when we loaded the information in the first place:

$excel              = New-ExcelInstance
$book               = Open-ExcelFile          -instance $excel -file .\SMCampaignControl-demo-201303-0.xlsx

$CampaignSheetId    = Get-ExcelSheetIdByName  -book $book -name 'campaign'
$CampaignDataSource = Get-ExcelSheet          -book $book -id $CampaignSheetId

$CampaignInfoSchema = Get-ExcelHeaders        -sheet $CampaignDataSource

Save-ExcelDataSet   -DataSet $UpdatedCampaign -sheet $CampaignDataSource -schema $CampaignInfoSchema # 6 mins.
Save-ExcelFile      -book $book

Close-ExcelFile     -instance $excel -book $book

Last thoughts …

It is obvious that there is still a lot of work to do. This is, actually, an example of how it feels to work with a first working prototype. However, you can also see all the pieces working together to solve a real problem that, until now, has mostly been addressed by labor-intensive processes. I hope I have been successful in showing not only the problem, but also the difference that the tools and techniques described above can make.

This initial release has taught me a lot, and your feedback has been invaluable. Thanks to your comments and contributions, I now have lots of notes that, hopefully, will make future releases better than I originally expected. Therefore, I would like to thank you all for your contributions and, of course, I would love to keep listening to your feedback.


License

Except where otherwise noted, ThinkInBig by Carlos Veira Lorenzo is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.