This paper explores which factors and which levels of those factors lead to successful online posts in a B2B context. Using real data from a software development company’s official social media outlets, data made available only to the authors, we conducted a fractional-factorial experiment with two dependent (output) variables as measures of success: number of impressions and number of actions. We examined the impact of six independent variables (“factors”), and selected interactions among them, on the two output measures. The factors are: day of post, time of day of post, presence of an image, presence of a hashtag, length of the message, and specific channel used. Three of the six factors were significant when analyzing number of impressions, while none of the factors reached the 5% significance level when analyzing number of actions.
Keywords: Social media posts, B2B, Content marketing, Fractional-factorial designs, Interaction effects, LinkedIn, Twitter.
Received: 8 August 2017 / Revised: 12 September 2017 / Accepted: 28 September 2017 / Published: 9 October 2017
What makes a great social media post? Social media managers must answer this central question if they are to succeed in driving traffic to their content and products. Website traffic leads to conversions, and conversions are, after all, what justify the investment in social media in the first place.
Today, the average person browsing the Internet is said to have an attention span of just 8 seconds, shorter than that of a goldfish, which is believed to be about 9 seconds (Watson, 2015). Social media marketers therefore need to capture a reader’s attention almost immediately. Given this, it is imperative for marketers to understand not only how to create sensational, useful and compelling content, but also how to ensure that its distribution is as effective as possible.
Can we deduce a set of factors such that the combination of their optimal levels leads to a successful online post as often as possible? What exactly are the factors, and the levels of these factors, that make a certain post sufficiently attractive to an audience that they will actually click a link, go to a different website, and in some instances even read the content and follow the call to action? Is it an image? The timing? The length of the text? Having a hashtag?
The answers to these questions are vital to a company’s social media and content strategy. Many social media marketers have attempted to provide them (Smith et al., 2015). Following a discussion of the industry’s best practices, our paper describes an experiment conducted to identify the specific factors that affect the success of a business-to-business social media post.
The data for the experiment was obtained from a single software development company. The company gave the researchers exclusive permission to use the company’s official social-media outlets to conduct the experiment. The company’s plan was to integrate the findings into their future online marketing strategy.
Social media marketing is a sub-section of content marketing. The focus of content marketing is to create content, such as articles, blog posts, white papers, case studies and user guides, that helps the audience, explains the company’s product or service, and makes the audience’s life easier by providing industry-related information. In turn, the company gains traffic, brand visibility, positioning as a thought leader in its field, and eventually, conversions.
One of the easiest and cheapest ways for companies to distribute content is through social media. Many studies have concerned themselves with which channels work best. While Facebook is king in some areas, many executives, and consequently businesses, have focused on networks such as LinkedIn and Twitter instead (Newman, 2016). Indeed, these two networks will be under closer study here.
The field of social media analytics is new and parallels the recent rise of social media and personal blogging platforms. The success of a social media post is notoriously hard to measure and quantify. How do you measure split-second decisions and interest? However difficult, measuring the effect of social media is not impossible. Chris Murdough, Vice President and Associate President of Digital Analytics at Mullen, a Boston, Massachusetts advertising agency, writes that the key is to understand why a company engages with social media (Murdough, 2009). The assessment of social media starts with selecting company-specific goals, setting the right KPIs, and understanding how to deploy and optimize content (Murdough, 2009; Sterne, 2010). Despite the difficulties of measuring the impact of social media posts (ranging from size of audience, to choice of platform, to the type of content itself), the effects of a social media post can be measured successfully.
What makes the perfect post is strongly contested. Statistical and scientific study of this field is becoming more common. One recent study used regression analysis and Analysis of Variance to understand the factors behind viral social media posts (De Almeida et al., 2016). The authors concluded that the most significant factors behind whether a post is shared are (1) whether it was created by a fan, and (2) whether it contained promotional offers. While these findings are extremely valuable for content-marketing strategies with a focus on the consumer (which, indeed, was their focus) and the use of Facebook, the conclusions have limited use in a business-to-business environment.
Companies whose core audience is C-level executives and corporate decision-makers can still receive significant value from social media, first by focusing on the platforms those audiences actually use. As a general population, CEOs and executives are under-represented in social media. When they do use these channels, they tend to choose LinkedIn and Twitter (Newman, 2016). For this reason, the two networks discussed in this paper are LinkedIn, the world’s largest online professional network, and Twitter, a micro-blogging platform frequented by industry thought-leaders and executives. Not coincidentally, these two information-sharing platforms are also the ones used by the software development company whose networks were used to run the experiment below.
Over the years, the social media marketing community has identified several useful metrics to assess the effectiveness of social media in reaching the right audience. Their main purpose is to measure progress against the company’s business objectives (Sterne, 2010). Metrics for reaching an optimal audience include impressions (the number of views a post gets), number of actions taken (comments, shares, likes, etc.), and engagement % (the number of actions, times 100, divided by total impressions) (Dodson, 2016). These metrics are partially the result of the design of the social networks themselves. Based on the KPIs of the company whose networks were used, the number of impressions and number of actions are the focus of this experiment.
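As a quick illustration of the engagement metric just described (using made-up numbers, not data from this study), the calculation can be written as a small Python helper:

```python
def engagement_rate(actions: int, impressions: int) -> float:
    """Engagement % = number of actions x 100 / total impressions."""
    if impressions == 0:
        return 0.0
    return 100.0 * actions / impressions

# A hypothetical post seen 500 times that drew 10 likes/shares/comments:
print(engagement_rate(actions=10, impressions=500))  # -> 2.0 (%)
```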
2.1. Best Practices for LinkedIn Posts
LinkedIn is one of the world’s largest online professional networks, with a database of 400 million professional contacts. It focuses on building professional relationships and job hunting. In 2016, Microsoft acquired it, intending to extend it as part of its vision for a professional cloud (Mims, 2016). Posts on LinkedIn tend to be business-focused. The content posted can last a surprising amount of time on users’ feeds – up to 24 hours, especially if shared by the poster’s network (Sheptoski, 2014).
LinkedIn has been in the social media toolbox for a long time, and many theories and best practices have been developed as to which posts are most successful and when. Unfortunately, many such studies end with a huge disclaimer: make sure to customize these findings to your own audience (Lee, 2014; Spasojevic et al., 2015). This caveat alone was enough for the software development company whose social media channels are studied below to investigate the best options for posting on social media using its own audience. The consensus for the best LinkedIn post is as follows:
Given its focus on professional networking, LinkedIn is one of the best social media platforms for a business-to-business environment.
2.2. Best Practices for Twitter Posts
Twitter is a micro-blogging platform founded in 2006. In the third quarter of 2016, 317 million people worldwide used the platform (Statista, 2016). It is one of the fastest ways to share information today, largely thanks to its defining feature of limiting messages to only 140 characters. However, due to this brevity, posts on Twitter do not last very long – on average their lifespan is about 18 minutes (Sheptoski, 2014). These values may change with the heavy, publicized use of tweets by the new U.S. president, Donald J. Trump.
Individuals and businesses frequently engage with Twitter to interact with an audience and disseminate thought leadership. In fact, Twitter changed the way the Internet community communicates when it successfully implemented the concept of the hashtag. A hashtag is a short phrase preceded by a # sign. It is a way to categorize a piece of information, making it easy to find other information about the same subject (Dodson, 2016). Since its successful implementation on Twitter, the hashtag has become a common occurrence on other social media (with the major exception of LinkedIn).
Conventional wisdom suggests that the perfect post on Twitter looks like this:
Due to its reputation as a quick and entertaining platform, Twitter has become the information dissemination channel for many business executives and corporations.
Following the advice of the many marketing management companies and experts, a software development company wanted to test which posts would work best on its own platforms. Content distribution has been a particular issue for the company, and its goal is to incorporate the findings of this study into its marketing strategy going forward.
3.1. Experimental Design
Social media professionals use a variety of tools to analyze the results of their campaigns. Our experiment makes use of some of these platforms and applies the experimental-design approach to determine significant factors in the real-life audience reception of social media posts. The experimental-design approach is used here as an alternative to A/B testing, an approach that dominates the industry (Evans, 2010). Traditional A/B testing is a variation of a controlled experiment that compares two levels of a factor, A and B, and looks for a statistically significant difference between the two levels. Unlike A/B testing, experimental design allows many factors and their levels to be studied simultaneously. It also allows testing of interaction effects between factors (Berger and Maurer, 2002).
The idea behind our experiment was as follows: choose factors that may influence a user’s decision to view and interact with the content, and decide on the levels of those factors to study. As a result, six factors were chosen, each having two levels, as will be described; two dependent/response variables (“Y1” and “Y2”) were selected as measures of output. Based on a 2-level fractional-factorial design, we carefully determined 16 combinations of levels of factors (out of the 2^6 = 64 possible combinations; details to be discussed subsequently). Each combination was implemented on a single post on the company’s social media channels; the response numbers (the Y1 and Y2) for each post were recorded.
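To make the counting concrete, the short sketch below (a minimal illustration in Python, not the authors’ software) enumerates all 2^6 = 64 level combinations, coded -1 for low and +1 for high in the order A through F of Table 1, and keeps the quarter that satisfies the two generator contrasts described in Appendix 1 (the ACF contrast fixed at “-” and the BDEF contrast at “+”), which reproduces the principal block of 16 runs listed in Table 2:

```python
from itertools import product

# Positions 0..5 correspond to factors A..F of Table 1; -1 = low, +1 = high.
full_factorial = list(product([-1, 1], repeat=6))
print(len(full_factorial))          # 2**6 = 64 possible kinds of post

A, B, C, D, E, F = range(6)
quarter = [run for run in full_factorial
           if run[A] * run[C] * run[F] == -1              # ACF contrast fixed at "-"
           and run[B] * run[D] * run[E] * run[F] == 1]    # BDEF contrast fixed at "+"
print(len(quarter))                 # 16 runs -- the quarter replicate actually posted
```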
3.2. Dependent Variables and Important Factors
As noted, there were two dependent variables of interest in this study, both dealing with the reception of business-related social media posts: Y1, the number of impressions a post received, and Y2, the number of actions taken on the post (comments, shares, likes, etc.).
These two variables are related. The more impressions a post receives, typically, the more actions it will gain. Indeed, there cannot be an action without, first, an impression.
There were six factors (essentially, “independent variables”), each at two levels; by tradition, we call the two levels “low (L)” and “high (H),” and which level is called L and which is called H is arbitrary. Each factor represents a specific feature of a social media post, and roughly corresponds to the social media platform best practices outlined above. The factors were: (1) type of day/day of the week, (2) social media channel, (3) presence of an image, (4) time of day, (5) length of message, and (6) presence of a hashtag. The six factors, and their respective low and high levels, are displayed in Table 1.
Table-1. Six factors studied with descriptions for low and high levels
Name | Factor | Low | High
A | Type of Day/Day of the week | Weekend (Sat, Sun) | Workday (Thu, Fri)
B | Social Media Channel | LinkedIn | Twitter
C | Image present | No | Yes
D | Time of Day | Afternoon (3-6pm) | Morning (7-10am)
E | Length of Message | Long (at least 70 characters) | Short (under 70 characters)
F | Hashtag present | No | Yes
Source: Construction by the authors based on discussion and analysis
Of interest in this study was the assessment of all the main effects and a selected number of two-way interaction effects. Based on the literature and discussion with the client, the authors decided that not only would the main effects of the factors likely be significant (at least for number of impressions, Y1), but also that certain two-factor interactions could not be ruled out (i.e., could not confidently be assumed to equal zero). There were seven such two-factor interactions (out of 15 possible two-factor interaction effects; the other eight were comfortably assumed to be zero). And, as usual, all higher-order interactions were assumed to equal zero. These are the seven two-factor interactions not assumed zero, going into the analysis:
In general, there is little direct evidence about the signs of interaction effects, if, indeed, they are non-zero. That is, the literature often suggests whether a main effect is significant and, if so, its direction; for interaction effects, this is most often not the case, and the answer is discovered only by experimentation – exactly what we do in this paper. 1
3.3. Data Collection and the Experiment Design
Given the six factors at two levels each, there were 64 possible combinations of social media posts. Since the experiment was run across two social media channels, testing all 64 combinations would mean comparing 32 posts on each medium. Regardless of the ease and low cost of posting, it would be highly inefficient to post all 32 combinations on a single social channel. Since a real, live audience was used, such an experiment would cause confusion, risk losing audience members, and in some cases even “spam” follower feeds. Because of these realities, a “quarter replicate” was designed. This allowed us to estimate the 6 main effects and 7 selected interaction effects by running only 16 combinations (one-fourth [i.e., a quarter] of 64). In general, it is not guaranteed that this can be accomplished, but the principles of experimental design, in this case of 2-level fractional-factorial designs, indicated that we were able to accomplish this goal. Details are provided later.
The experiment was run over 4 days: two days each for the low and high categories of “day.” The “low” level was run on two weekend days – Saturday and Sunday – and the “high” level was run on two weekdays – Thursday and Friday. The numbers of impressions and actions were collected at the end of each day. Both networks had a similar potential reach in terms of the size of their audiences. The LinkedIn network had about 1,700 followers, while the Twitter network had about 1,400 followers. (Out of context, these numbers may appear to be somewhat low; however, recall that this is a B2B situation involving a not-so-large company.)
Furthermore, it was impossible to overlook the question of the actual content shared. To minimize the impact of the actual topic of the post, the company’s top six blog posts in three major subject categories were chosen. This reduced the number of posts shared per day to twelve. So, three articles were shared each day, and both test days contained articles dealing with three similar subjects. This was done for two reasons: First, it helped dilute the number of very similar posts on one platform by using different pieces of content. It also served as replication in the analysis. The posts were created and automated in advance, using the Buffer social media sharing-tool. To minimize the impact of copy, the texts did not vary from one combination to another. Which 16 combinations of a post were actually run? The combinations were carefully selected to adhere to a 2^(6-2) experimental design (this is the standard notation for a designed experiment with 6 factors, each having two levels, and choosing to run only a well-chosen 16 – one-fourth of the 2^6 possibilities). (Of course, confirming consistent notation, 2^(-2) = 1/4.) The 16 combinations run are listed in Table 2. Details of the design process are described in Appendix 1.
We include the standard notation for describing these combinations, in order to make it easier to tie Table 2 together with the description in Appendix 1. If a letter is present, the factor is at H level; if the letter is not present, the factor is at L level. For example, the last of the 16 combinations listed in Table 2 (note: the order of listing the combinations is arbitrary) is “cdf.” This means that factors A, B, and E are at low level, while factors C, D, and F are at high level. Of course, each combination has one of the levels from each of the six factors. If all factors are at low level, the symbol, “1,” is traditionally used – first combination listed in Table 2.
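To make the letter notation concrete, the small helper below (illustrative Python, not part of the original study) expands a combination label from Table 2 into the explicit factor levels of Table 1:

```python
# Low/high level descriptions for each factor, taken from Table 1.
LEVELS = {
    "A": ("Weekend", "Workday"),
    "B": ("LinkedIn", "Twitter"),
    "C": ("Image=NO", "Image=YES"),
    "D": ("Afternoon", "Morning"),
    "E": ("Long message", "Short message"),
    "F": ("Hash=NO", "Hash=YES"),
}

def expand(combination: str) -> dict:
    """A present letter means 'high'; an absent letter means 'low'; '1' means all low."""
    present = set() if combination == "1" else set(combination.upper())
    return {f: (high if f in present else low) for f, (low, high) in LEVELS.items()}

print(expand("cdf"))
# {'A': 'Weekend', 'B': 'LinkedIn', 'C': 'Image=YES', 'D': 'Morning',
#  'E': 'Long message', 'F': 'Hash=YES'}
```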
Table-2. The 16 Low/High social media post combinations run
Principal Block | Combination Low | Combination High
1 | Weekend, LinkedIn, Image=NO, Afternoon, Long message, Hash=NO | None
bd | Weekend, Image=NO, Long message, Hash=NO | Twitter, Morning
ac | LinkedIn, Afternoon, Long message, Hash=NO | Workday, Image=YES
abcd | Long message, Hash=NO | Workday, Twitter, Image=YES, Morning
be | Weekend, Image=NO, Afternoon, Hash=NO | Twitter, Short message
de | Weekend, LinkedIn, Image=NO, Hash=NO | Morning, Short message
abce | Afternoon, Hash=NO | Workday, Twitter, Image=YES, Short message
acde | LinkedIn, Hash=NO | Workday, Image=YES, Morning, Short message
aef | LinkedIn, Image=NO, Afternoon | Workday, Short message, Hash=YES
abdef | Image=NO | Workday, Twitter, Morning, Short message, Hash=YES
cef | Weekend, LinkedIn, Afternoon | Image=YES, Short message, Hash=YES
bcdef | Weekend | Twitter, Image=YES, Morning, Short message, Hash=YES
abf | Image=NO, Afternoon, Long message | Workday, Twitter, Hash=YES
adf | LinkedIn, Image=NO, Long message | Workday, Morning, Hash=YES
bcf | Weekend, Image=NO, Long message | Twitter, Image=YES, Hash=YES
cdf | Weekend, LinkedIn, Long message | Image=YES, Morning, Hash=YES
Source: Construction by the authors based on discussion and analysis
Given the factors and the body of knowledge, the authors expected all of the main effects to be significant (with, perhaps, not as much confidence about factor F). We were not certain about which interaction effects might dominate, but would not have been surprised at the significance of any of the seven, or at the non-significance of any of them.
After conducting the experiment and collecting the data, Analysis of Variance (ANOVA) was used to test for the statistical significance of the effects, for each of the two dependent measures, Y1 and Y2.
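For readers who wish to reproduce this step, a minimal sketch is given below. It assumes, hypothetically, that the observations sit in a file named posts.csv with ±1-coded factor columns A through F and the impression counts in a column Y1; only the A:B interaction term is written out, and the other selected two-factor interactions would be appended to the formula in the same way:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data file: one row per post, factors coded -1/+1, response in Y1.
df = pd.read_csv("posts.csv")

# Main effects plus selected two-factor interactions (only A:B shown here).
model = ols("Y1 ~ A + B + C + D + E + F + A:B", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # sums of squares, F values, p-values
```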
4.1. Impressions
The ANOVA table for the analysis of Y1, impressions, is shown in Table 3.
Table-3. Analysis of variance and significance for Y1 - Impressions
Source of variability | SSQ | DF | MSQ | Fcalc
Effects (13 estimated) | 145755 | 13 | 11212 | 4.00
Error | 229769 | 82 | 2802 |
Total | 375524 | 95 | |
Fcrit (1, 82) = 3.96 at α = 0.05
Source: Construction by the authors based on discussion and analysis
Comparing the F-value of each effect to the F-critical value (1, 82) = 3.96 at α = 0.05, we found that factors A (type of day/day of the week), B (social media channel), and D (time of day), as well as the AB interaction (type of day by social media channel), were significant. The specific p-values (to 3 digits) are: p-value-A = .000, p-value-B = .014, p-value-D = .007, and p-value-AB = .006.
Notably, three of the four significant effects were related to time (A, D, AB). Contrary to our expectations, the main effects of factors C (presence of an image), E (length of message) and F (presence of a hashtag) were not significant. Also, only one of the (a priori) 7 potentially non-zero interaction effects was significant (i.e., deemed to be non-zero).
To interpret the effects, we look at the actual effect values before scaling and squaring them (as appropriate to enter into the ANOVA table) and refer back to the low / high definitions of each factor as described in Table 1. For a main effect of a factor, these are calculated by taking the difference between the average yield when the factor is at high level minus the average yield when the factor is at low level. For a two-factor interaction effect, we take the main effect of one factor (say, A) when the second factor (say, B) is held high and subtract from it the main effect of A when B is held low. These effects are listed in Table 4 for the significant effects; as noted, the main effects represent the change in average number of impressions as we go from the low level of the factor to the high level of the factor. We shall describe the meaning of the AB interaction effect below.
Table-4. Values of the significant effects
Effect | Value |
A | 50.4 |
B | -27.1 |
D | 30.1 |
AB | -30.4 |
The values in Table 4 indicate the following: As we go from Weekend (L) to Workday (H) [Factor A], the number of impressions increases by about 50, averaged over all levels of all other factors. As we go from LinkedIn (L) to Twitter (H) [Factor B], the number of impressions decreases by about 27, again averaged over all levels of all other factors. As we go from Afternoon (L) to Morning (H) [Factor D], impressions increase by about 30 – again, averaged over all levels of all other factors. The AB interaction value indicates that, as we go from Weekend (A low) to Workday (A high), the B effect decreases; since the B effect is negative, “decreasing” means becoming more negative. In positive terms, the gap in impressions between LinkedIn and Twitter (i.e., LinkedIn minus Twitter) is [significantly] larger on a workday (Thursday or Friday) than on the weekend (Saturday or Sunday).
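The effect calculations described earlier translate directly into code. The sketch below uses a toy data set (not the study’s data) with ±1-coded factors to compute a main effect and a two-factor interaction exactly as defined in the text; the toy numbers also reproduce the qualitative pattern just described, with a larger LinkedIn-over-Twitter gap on workdays showing up as a negative AB value:

```python
import pandas as pd

# Toy illustration only: factors A (day type) and B (channel) coded -1/+1, response Y1.
df = pd.DataFrame({
    "A":  [-1, -1,  1,  1, -1, -1,  1,  1],
    "B":  [-1,  1, -1,  1, -1,  1, -1,  1],
    "Y1": [100, 90, 160, 110, 104, 86, 156, 114],
})

def main_effect(data, factor, y="Y1"):
    """Average response at the high level minus average response at the low level."""
    return data.loc[data[factor] == 1, y].mean() - data.loc[data[factor] == -1, y].mean()

def interaction_effect(data, f1, f2, y="Y1"):
    """Main effect of f1 when f2 is held high, minus the main effect of f1 when f2 is held low."""
    return main_effect(data[data[f2] == 1], f1, y) - main_effect(data[data[f2] == -1], f1, y)

print(main_effect(df, "A"))              # workday average minus weekend average
print(interaction_effect(df, "A", "B"))  # negative: the channel gap widens on workdays
```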
4.2. Actions
To analyze actions, Y2, the exact same approach was taken as with Y1. It turned out that none of the effects were significant. The results are summarized in Table 5. It is pertinent to note that, in the collected data, many posts received no actions at all.
Table-5. Analysis of variance and significance for Y2 - Actions
Source of variability | SSQ | DF | MSQ | Fcalc
Effects | 34.41 | 15 | 2.29 | .71
Error | 263.74 | 82 | 3.22 |
Total | 295.33 | 95 | |
Fcrit (1, 82) = 3.96 at α = 0.05
Source: Construction by the authors based on discussion and analysis
At α = 0.05, and F critical value (1, 82) = 3.96, and as noted, none of the main effects of the factors or the interactions were significant predictors of actions taken on the social media posts. All had a p-value above .05. The effect with the lowest p-value (in a sense, the "nearest to significant") is the main effect of factor D [Time of Day], with a p-value = .14. All other effects had a p-value > .20. As we saw, this effect was significant for Impressions. Having no significant effects suggests that an entirely different set of factors (and interactions) may be influencing the reader’s propensity to interact with shared content.
The results for Y1 validate the industry, and the company’s current, best practices: posting on weekday mornings tends to draw most eyes to a post. There is also evidence that business-related content does better on LinkedIn, which is the preferred channel for such content. This experiment, however, was not able to answer the question of what actually influences people to take an action on a post (Y2). This may be due to the need for more data.
The conclusion for this specific firm is that while its posting schedule did draw eyes to its content, it did not yet inspire enough actions to translate into actual website traffic. In general, the firm should keep its current posting schedule but shift its focus to the type of content it publishes. The usefulness of the content may matter more for inspiring actions, especially as the company is still in its social media growth stage. The timing of the posts had significant influence over whether the content was seen, but it had no significant impact on actual engagement, as was evident from the non-significance of the effects on the Y2 (Actions) variable.
Several limitations may have influenced the results. First, other factors could explain the variation in impressions and actions on social media posts (Whitcomb and Anderson, 2001). In this case, more experimentation and more data would be useful. Another possible limitation is the fact that the experiment was done using live social media platforms, creating a highly variable environment. Each platform uses its own internal algorithms that are out of the publisher’s control; as a result, some posts may have benefitted from an extra promotional boost more than others. The two networks also had a similar, but not equal, number of followers, which may have caused a portion of the disparity in reception. A plausible solution would be to test posts in a more controlled environment, such as a survey asking target users to rank the visual attractiveness of a post. This option, however, departs from the real “acid test” of posting on live platforms. Future experimentation can provide more time and/or more data to identify significant factors. This can be achieved with more replication or by extending the length of the experiment. Often, eliminating outliers or extending the number of runs helps in revealing significant effects (Whitcomb and Anderson, 2001). Additional runs could give more evidence for identifying outliers and reducing any seasonal effects.
Finally, it may be useful to replicate the experiment using a different but similar audience. While the main objective of this experiment was to test a specific audience, it may be useful to replicate it using a larger audience with higher engagement. There is also a need to deal effectively with the issue of different article topics. As it is, there is the distinct possibility that action may be influenced by “what someone is interested in at the moment.” Quantifying and either amplifying or limiting the impact of the topic would likely make the results less variable, thus potentially increasing the significance of certain effects. One potential way of dealing with this issue could be to test the type of posts without sharing any specific piece of content (sharing instead, for example, a single insight). The experimental-design approach used in this paper proved to be a very worthwhile and efficient way to study multiple factors. It should be considered as a useful and effective alternative to the industry standard of routine pair-wise A/B testing. Experimental design has the potential to enrich the field of social media marketing-analytics.
Funding: This study received no specific financial support.
Competing Interests: The authors declare that they have no competing interests.
Contributors/Acknowledgement: Both authors contributed equally to the conception and design of the study.
Berger, P.D. and R.E. Maurer, 2002. Experimental design with applications in management, engineering and the sciences. Belmont, CA: Duxbury, 1.
De Almeida, I.S. Marcos, M.C. Costa, L.F. Ricardo and P.R. Scalco, 2016. Engage and attract me, then I'll share you: An analysis of the impact of post category on viral marketing in a social networking site. Revista Brasileira de Gestão de Negócios, 18(62): 545-569.
Dodson, I., 2016. The art of digital marketing: The definitive guide to creating strategic, targeted and measurable online campaigns. Hoboken, NJ: Wiley Publishing.
Ellering, N., 2016. What 16 studies say about the best times to post on social media. CoSchedule. Retrieved from https://www.slideshare.net/.../what-16-studies-say-about-the-best-times-to-post-on-socialMedia [Accessed Apr 13, 2016].
Evans, D., 2010. Social media marketing: The next generation of business engagement. Hoboken, NJ: Wiley Publishing.
Kolowich, L., 2016. The best times to post on facebook, twitter, linkedin & other social media sites. Hubspot Blog. Retrieved from https://blog.hubspot.com/marketing/best-times-post-pin-tweet-socialmedia-infographic#sm.0001r3o2e418nsdgiuadhspu16ntc [Accessed January 6, 2016].
Lee, K., 2014. 7 essential LinkedIn marketing stats: When to post, what to post and how to improve. Retrieved from https://blog.bufferapp.com/7-vital-statistics-to-help-with-your-linkedin-marketing-strategy [Accessed March 24, 2014].
Mims, C., 2016. Why Microsoft bought LinkedIn. Wall Street Journal.
Murdough, C., 2009. Social media measurement: It’s not impossible. Journal of Interactive Advertising, 10(1): 94-99.
Newman, D., 2016. Most Fortune 500 CEOs don't use social media, and that's A-OK. Retrieved from https://www.forbes.com/sites/danielnewman/2016/02/09/most-fortune-500-ceos-dont-use-social-media-and-thats-a-ok/ [Accessed February 09, 2016].
Pollard, C., 2015. The best times to post on social media. Huffington Post. Retrieved from http://www.huffingtonpost.com/catriona-pollard/the-best-times-to-post-on_b_6990376.html [Accessed April 06, 2015].
Sheptoski, L., 2014. Are you maximizing the shelf life of your social media? Retrieved from https://www.weidert.com/whole_brain_marketing_blog/bid/206554/are-you-maximizing-the-shelf-life-of-your-socialmedia [Accessed June 3, 2014].
Smith, K.T., B.L. Janell and L.M. Smith, 2015. Social media adoption by corporations: An examination by platform, industry, size and financial performance. Academy of Marketing Studies Journal, 19(2): 127-143.
Spasojevic, N., Z. Li, A. Rao and P. Bhattacharyya, 2015. When-to-post on social networks. Retrieved from https://arxiv.org/abs/1506.02089 [Accessed June 5, 2015].
Statista, 2016. Number of monthly active twitter users worldwide from 1st quarter 2010 to 3rd quarter 2016 (in Millions). Statista. Retrieved from https://www.statista.com/statistics/282087/number-of-monthly-active-twitter-users/.
Sterne, J., 2010. Social media metrics: How to measure and optimize your marketing investment. Hoboken, NJ: Wiley Publishing.
Thomsett-Scott, B.C., 2013. Marketing with social media: A LITA guide. Chicago, IL: American Library Association Publishing.
Watson, L., 2015. Humans have shorter attention span than goldfish, thanks to smartphones. Telegraph. Retrieved from http://www.telegraph.co.uk/science/2016/03/12/humans-have-shorter-attention-span-than-goldfish-thanks-to-smart/ [Accessed May 15, 2015].
Whitcomb, P. and M.J. Anderson, 2001. Reasons for not finding significant factors. StatEase.com. Retrieved from https://www.statease.com/news/faqalert10.html [Accessed December/January 2001].
APPENDIX-1.
We provide information in this Appendix that is, admittedly, accessible only to those somewhat familiar with the field of design of experiments, and specifically with 2-level fractional-factorial designs. The purpose of this is to illustrate the actual process of assigning treatment combinations in this experiment.
Given the design of the 2^(6-2) experiment, there are 63 effects (2^6 - 1), and we receive our results in "alias groups" of 4 effects. Three effects are lost completely, and we have 15 alias rows of 4 effects each, in total capturing the other 60 effects (Berger and Maurer, 2002). We wanted each of the 13 effects in which we were interested (i.e., believed to be potentially non-zero) to be in an alias row with 3 other effects assumed to be zero. The process for making this happen, so that each of the 13 important effects is grouped only with effects assumed to be zero, and is thus "clean" (cleanly estimated), begins with what is called a defining relation or defining contrast.
The defining relation chosen (which, essentially, keys the entire rest of the experiment prior to data collection) was:
I = ACF = BDEF = ABCDE
This specific defining relation worked perfectly to create alias groups in which each of the 13 potentially non-zero effects lands in its own row. [The full table of 15 alias rows is not reproduced here; each row is obtained by multiplying an effect by the three "words" of the defining relation, with letters appearing twice cancelling – for example, the first row is A = CF = ABDEF = BCDE.]
Each of the "=" signs will end up a "+" or a "-" as a function of which set of 16 combinations out of the 64 we chose to run. We have 4 choices of sets of 16, each a "quarter replicate" of the 2^6 experiment. For the set of 16 we chose (discussed below), in every alias row the 1st and 3rd "=" signs become "-" signs, and the 2nd "=" sign becomes a "+" sign. Thus, the first alias row became (A – CF + ABDEF – BCDE). This value came out 50.4. Of course, we conclude that the estimate of the A effect = 50.4 (see Table 4), since we are assuming that the other three effects in that alias row equal zero. Two of the alias rows (the 14th and 15th) do not contain, by assumption, any non-zero effects, and thus their estimated effects were lumped together with the error term. (This is why we have 82, and not 80, degrees of freedom for the error term.) Most important is that each of the 13 potentially important effects is in a separate alias row.
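The alias rows themselves can be generated mechanically from the defining relation: multiplying any effect by each "word" of the defining relation, with letters that appear twice cancelling, yields its three aliases. A brief sketch of this standard construction (illustrative code, not the authors'):

```python
from itertools import combinations

FACTORS = "ABCDEF"
WORDS = ["ACF", "BDEF", "ABCDE"]          # I = ACF = BDEF = ABCDE

def multiply(e1: str, e2: str) -> str:
    """Multiply two effects: letters appearing an even number of times cancel out."""
    return "".join(sorted(set(e1) ^ set(e2))) or "I"

# All 63 effects: every non-empty subset of the six factor letters.
effects = ["".join(c) for r in range(1, 7) for c in combinations(FACTORS, r)]

rows, seen = [], set()
for e in effects:
    if e in seen or e in WORDS:           # the three words themselves are aliased with I (lost)
        continue
    group = [e] + [multiply(e, w) for w in WORDS]
    rows.append(group)
    seen.update(group)

print(len(rows))   # 15 alias rows of four effects each
print(rows[0])     # ['A', 'CF', 'ABDEF', 'BCDE'] -- the first alias row described above
```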
Using the defining relation above, the four blocks of the quarter-replicate were determined, as shown in Table 6; we have omitted describing the method for determining the 4 blocks, but it is definitive that these are the 4 blocks, one of which is to be chosen to be run. The "principal block,” the block with the "everything-at-low-level" combination, was [arbitrarily] chosen to be used in the experiment. (Recall: if the letter is present, the factor is at H level, while if the letter is not there, the factor is at L level; and, the combination of “everything at low level” is denoted by the symbol "1"). For example, "bd" means the combination having factors B and D each at high level, and having factors A, C, E, and F each at low level.
Software (or hand calculations) can be used to determine the effects. Table 4, earlier, noted the values of the four effects that were significant; that these four effects are significant can be seen also in the ANOVA table for Impressions, Table 3.
1 For example, we may “know,” based on common sense and/or marketing theory, that increasing shelf space in a supermarket increases sales; we may “know” the same thing about increasing advertising of the product. However, what we would typically not know without direct experimentation, is whether the effect on sales of increasing the shelf space differs (and, if so, its direction), depending on whether or not we increase the advertising of the product.