Rethinking online product review usefulness

A check of review utility against review visibility. Distribution of ratings. Product type segmentation analysis. Determining the price category. Improving buyers' purchasing decisions by reordering reviews misranked by the current sorting algorithm.



Otherwise = 0

Binary

13. Measurement indicators (measurement_indicators)

Description: the review includes a qualitative assessment of either the product itself or its features.

Measurement: detected via a set of specific strings in the body of the review; the dictionary containing these strings can be found in the Appendix under the name {DICTIONARY == measurement_indicators}.

Coding: 1 if any section of the review contains keywords from {DICTIONARY == measurement_indicators}; 0 otherwise.

Type: binary.

14. Statements about delivery and/or packaging (delivery_and_packaging)

Description: the author mentions the delivery process, delivery company, distributor and/or the packaging of the product.

Measurement: detected via a set of specific strings in the body of the review; the dictionary containing these strings can be found in the Appendix under the name {DICTIONARY == delivery_and_packaging}.

Coding: 1 if any section of the review contains keywords from {DICTIONARY == delivery_and_packaging}; 0 otherwise.

Type: binary.

15. Calls to action (call_to_action)

Description: the author attempts to persuade or dissuade the reader from doing something, or to make them feel a certain way.

Measurement: detected via a set of specific strings in the body of the review; the dictionary containing these strings can be found in the Appendix under the name {DICTIONARY == call_to_action}.

Coding: 1 if any section of the review contains keywords from {DICTIONARY == call_to_action}; 0 otherwise.

Type: binary.

Results

RQ #1

First of all, we should mention that when analyzing the reviews sorted by helpfulness, we found that their placement does not depend solely on the number of votes users left on them, which weakens our initial assumption that the sorting algorithm is unfair. However, when we plotted reviews' helpfulness positions against their visibility scores, we still saw that 0-vote reviews have no chance of being placed in the top section.

Chart 5.

Only one product's reviews are shown here to avoid overplotting, but a similar pattern is present across all products.

Chart 6

The top review section

Distributions

The first set of hypotheses relates to how review scores are distributed. When we plotted a histogram of all reviews' visibility scores with a binwidth of 1, we saw an immense right skew, with the 0-vote bin towering over all others. The overlaid density plot (the dashed red line) shows a similar picture.

Chart 7

The highest visibility score in our sample is 2,805, but only 618 reviews had a visibility score higher than 200, so to represent the information more clearly we truncated the plot to the first 2.67% of the visibility score range.
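A minimal ggplot2 sketch of how such a truncated histogram with a density overlay can be produced; the data frame reviews and its column visibility_score are assumed names for illustration, not the exact objects used in this work.

library(ggplot2)

# Hypothetical data frame of scraped reviews; visibility_score is the total
# number of up- and downvotes a review has received.
# reviews <- read.csv("reviews.csv")

ggplot(reviews, aes(x = visibility_score)) +
  geom_histogram(binwidth = 1, fill = "grey70", colour = "grey30") +
  # density rescaled to counts so it overlays the 1-binwidth histogram
  geom_density(aes(y = after_stat(count)), colour = "red", linetype = "dashed") +
  # truncate to ~2.67% of the 0-2805 range (roughly the first 75 votes)
  coord_cartesian(xlim = c(0, 75)) +
  labs(x = "Visibility score (number of votes)", y = "Number of reviews")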

It can be said with certainty that the problem of review obscuration exists in Russian online markets and is similar to the obscuration observed in their Western counterparts.

But how acute is the issue? Can we say that most reviews don't receive viewers' attention at all? That would be the case if 0-vote reviews made up more than 50% of the overall review base. Based on the histogram, 0-vote reviews make up only 29.54% of all reviews on average, which is massive, but nowhere near the 50% we aimed for. Only when we group 0-, 1-, 2- and 3-vote reviews together do we reach 53.74% of the total. So, to see if the hypothesized relationship holds for any individual product type, we plotted the cumulative proportions of all reviews at or below a certain number of votes, while also controlling for the type of the underlying product. The thick red line, horizontally dissecting the graph in two, represents the midpoint - 50% of all reviews. Initially we see that the lowest bright-red lines (representing 0-vote reviews exclusively) lie below the 50% threshold for all product types. However, this is not yet a reason for celebration, as we only have to add a couple more votes to reach a majority of reviews:

Chart 8.

For technically simple and cheap products, 0- and 1-vote reviews make up ~51% of all their respective reviews.

For very popular products, 0-, 1- and 2-vote reviews cumulatively make up 53.71% of all reviews.

For the other popularity categories and for both experience and search products, 0-, 1-, 2- and 3-vote reviews make up from 50.83% to 53.92%.

For mid-range priced and technically complex products, 0- to 4-vote reviews make up 53.59% and 53.66% respectively.

And finally, the highest-scoring category overall is "expensive products", which we defined as products over 20,000 RUB. For them you have to go as high as 7-vote reviews to surpass the threshold (reviews with 7 votes or fewer account for 51.06% of all reviews for expensive products).

The results make sense and are congruent with our predictions: for buyers, expensive products demand accepting higher financial risk, so they make sure that a large portion of other people's experiences was satisfactory. Popular products, on the other hand, simply contain too many reviews for a person with any level of caution to read them all. The difference between experience and search products does not seem to be significant. The review sections of technically uncomplicated products are voted on less, most likely because people already know what to expect and don't need the advice of others as often.

Each colored line represents the number of reviews at or below a certain vote threshold, divided by the overall number of reviews. So, for example, the blue "7" line includes only those reviews whose visibility score (the number of votes they have) is 0, 1, 2, 3, 4, 5, 6 or 7. The red line represents the 2nd quartile (half of all reviews).
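The cumulative proportions behind this chart can be computed along the following lines; this is a sketch, with a single assumed grouping column product_type standing in for the several segmentations (price, popularity, complexity, search/experience) actually shown.

library(dplyr)
library(tidyr)

# For every vote threshold k from 0 to 10, the share of a product type's
# reviews that have k votes or fewer (one line per product type in Chart 8).
cumulative_shares <- crossing(reviews, threshold = 0:10) %>%
  group_by(product_type, threshold) %>%
  summarise(proportion = mean(visibility_score <= threshold), .groups = "drop")

# The rows with threshold == 3, for example, reproduce the "0-3 vote" shares
# quoted above for each product type.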

Next, we test whether viewers are more prone to vote on negative reviews than on positive ones. First, we get a general picture of the vote distributions within each star-rating group of reviews (graph below).

Chart 13.

The dots in the middle are the mean number of votes for each group.

The overall trend is seemingly clear: the more negative a review is (indicated by a lower star rating), the more votes (both positive and negative) it receives on average. An almost linear negative relationship between the star rating and a review's visibility score is visible: 1-star reviews receive 26.83 votes on average, while 5-star reviews get almost 2.4 times fewer - 11.21. Another interesting observation is that the more negative a review becomes, the smoother its distribution gets, as depicted by the shape of the violin plots, meaning that at least one of these statements is correct:

negative reviews have far fewer unvoted reviews than positive ones

negative reviews have proportionally more super-top reviews reaching hundreds or even thousands of votes than positive ones. Either of these statements supports our assumption that negative reviews invoke a voting response more often than positive ones.

Chart 14.

Based on the data distribution, even given the effect that outliers approaching 3,000 votes have on the means, it still seems suspicious that groups of reviews distinguished by their negativity alone would have such a high average score. To make sure that we have enough observations in each group, we take a look at the star-rating distributions of individual reviews and the aggregate ratings of the products they come from.

Chart 15.

From the bar plots [Chart 15] and [Chart 16] we can also see that people are hesitant to leave downvotes and instead are prone to leave positive feedback, regardless of the status quo about the product's quality.

Chart 16.

It looks like, even though we sampled almost all highly reviewed products, we still got far more positive than negative reviews, and the products on the platform generally received praise. On a side note, the number of reviews in product groups of different aggregate ratings almost mimics the number of products in these groups, which means that on average products receive a similar number of reviews, so we could potentially extrapolate our findings from smaller review groups to the entire product base with little expected loss of accuracy. Consequently, the great discrepancy between the number of positive and negative reviews is simply due to our sample containing a relatively small number of 2- to 3-star-rated products (only 10 items); otherwise we would have seen a more symmetrical picture in the graph of review distributions among products of various aggregate ratings [Chart 17]. But for generally positively rated products there certainly exists some positive correlation between a review's negativity and its visibility, meaning people are more willing to vote on negative reviews than on positive comments, as can be seen in the faceted boxplot chart below.

Only the 1-star review data for the 2.0-rated product group is somewhat relevant, as for 2.0-rated products the 2- to 5-star review groups don't have enough observations to be representative. The individual points are outliers and their number is representative of our zero-inflated distribution.

Chart 17.

Chart 18.

Judging by the size of the "box area", negative reviews consistently have significantly higher kurtosis, which in our case of heavily right-skewed distributions means far fewer unvoted reviews; their 1st quartile never reaches 0. We can also say that negative reviews remain the leaders in motivating people to vote, regardless of the aggregate public sentiment about the product (which we may also refer to as the "status quo" or "product rating").

Chart 19

So, perhaps the heightened attention and willingness to vote on negative reviews is attributable to their controversy? If most 1- and 2-star reviews come from positively rated products (more than 96% of all negative reviews do), then one might expect that viewers, who generally hold a positive outlook towards the product, will more often disagree than agree, which would lead to negative reviews being downvoted. To see if we are correct, we create a segmented boxplot chart similar to Chart 18, but this time split into the negative and positive components that make up reviews' visibility scores.

Chart 20.

Indeed, if we contrast the positive and negative feedback that negative reviews receive, we can see that for positively rated products negative reviews receive critique, while receiving more and more agreement for products rated below 4 stars. Peculiarly, even though they are more often downvoted than upvoted for 4-, 4.5- and 5-star products, they are still upvoted more than positive reviews on average in those very product groups! So, even though people are not as prone to leave negative comments about their own user experience (judging by their relatively low numbers compared to positive reviews), they are prone to react sharply to the negative experience of others. It is almost as if there exists an audience that filters out all but 1- and 2-star reviews. A similar, but much weaker, reverse trend can be seen for positive reviews:

So, as you go up the product ratings, the average number of downvotes on negative reviews grows. The same is true in the opposite direction for positive reviews. This criss-cross pattern is further evidence that you are less likely to be supported if you go against the status quo. One odd relationship is that, while negative reviews receive approximately the same average number of upvotes across all product ratings, positive reviews gain more and more traction as the product rating rises. We cannot explain this phenomenon, but it gives hope to products with lower aggregate user ratings (product ratings): positive reviews are on average always less popular, but are more beneficial for products of all ratings.

Finally, to be fully certain that the differences we identified are statistically significant, we ran several dozen t-tests between all pairs of review groups defined by their star ratings, while controlling for the underlying product's rating [Chart 21]. We see that most differences occur for positively rated products (3.5 stars on average and up), where (1) the raw numbers of 4- and 5-star reviews become so massive that their means almost inevitably differ from those of the other groups, and (2) 1- and 2-star (negative) reviews differ from every other group (except each other in the 5.0 group).
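We do not reproduce the exact testing procedure here, but a minimal sketch of such pairwise comparisons with base R's pairwise.t.test, run within each product-rating group (visibility_score, stars and product_rating are assumed column names), looks like this:

# Pairwise t-tests of mean visibility score between star-rating groups,
# run separately within each aggregate product rating.
results <- lapply(split(reviews, reviews$product_rating), function(d) {
  pairwise.t.test(d$visibility_score, factor(d$stars),
                  p.adjust.method = "none", pool.sd = FALSE)
})

# e.g. results[["4.5"]]$p.value is the matrix of p-values for 4.5-rated products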

We conclude that controversial opinions motivate people to give more attention to hyper-negative reviews and a little more attention to hyper-positive ones. For positively rated products, feedback on positive reviews is predominantly positive, and feedback on negative reviews is predominantly negative. The trends are reversed for reviews of poorly rated products, but there remains a bias in favor of positive reviews, which do not suffer there as strongly as negative reviews do among positively rated products.

Chart 21. T-tests matrix for reviews with different star rating.

To continue our analysis of review star ratings and their effect on review visibility, we need to understand whether people sort by "helpfulness" and then abandon reading reviews after a certain page. On Yandex.Market, where we sampled our reviews from, only 10 reviews are shown per page regardless of the sorting style, and on the very first page you are initially shown 5 page numbers to click through. The set of pages you are allowed to choose changes with each opened page: except for the first and last two pages, your current page is represented as a grey box surrounded by two options to go forward a page or two to its right and two options to go back a page or two to its left. We believe that many viewers regard the first 5 pages shown as either the only ones worth reading (considering everything after that "garbage reviews" if they sorted by helpfulness) or sufficient to get a sense of what the product is about and what to expect when using it. These first 5 pages contain the first 50 reviews which, when sorted by helpfulness (the variant we study), we will call "the top section"; together with the rest of the reviews they make up "the entire review base". To test whether there really exists a point of oversaturation beyond which very few people read further, we group all reviews by their helpfulness position - how quickly you would see a review if all reviews were listed top-to-bottom sorted by helpfulness (as decided by the current sorting algorithm). This way, the very first review on the very first page has helpfulness position 1, and the first review on the fifth page has helpfulness position 41. In the line chart below [Chart 22] we plotted helpfulness position against the average visibility score of the reviews holding that position. We group reviews regardless of how many reviews each product has and distinguish only their helpfulness position, because it is reasonable to assume that no matter how many unique reviews a product's review page may offer, viewers have only so much time and patience to read some of them, not to mention that after a certain point the information in them begins to repeat.
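A sketch of how the helpfulness position and the top-section flag can be derived; product_id and site_helpfulness_rank (the order in which the site listed reviews when sorted by helpfulness at scraping time) are assumed column names:

library(dplyr)

reviews <- reviews %>%
  group_by(product_id) %>%
  # order in which the site lists reviews when sorted "by helpfulness"
  arrange(site_helpfulness_rank, .by_group = TRUE) %>%
  mutate(
    helpfulness_position = row_number(),
    in_top_section       = helpfulness_position <= 50  # first 5 pages x 10 reviews
  ) %>%
  ungroup()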

Chart 22.

The relationship is evident. The grey line, representing the visibility score (both up- and downvotes), has an area of high variance starting approximately after the 13th page (130 reviews / 10 reviews per page). This region has so many peaks because of the structure of the current ordering system: first come very helpful reviews (here we mean our own definition of "helpfulness"), then less helpful ones, until you reach a helpfulness ratio of 0.5 (upvotes = downvotes); then the system treats 0-vote reviews as "potentially beneficial" and places them next; only after that come the reviews with more downvotes than upvotes. This isn't always exact, and some individual reviews get mixed in with the wrong group, but in general the relationship we described holds well. One piece of evidence was depicted in Chart 5, which contained a long stripe of values that were exclusively 0 with no variation. Another can be seen again in Chart 22, where we also plotted the average upvotes at each helpfulness position (the red line): this line has a much smoother form, more accurately representing the portion of viewers who churn after reading a certain number of reviews sorted by helpfulness top-to-bottom.
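Our reading of that ordering can be summarised in a short sketch; this approximates the behaviour we observed, it is not Yandex Market's documented algorithm, and upvotes/downvotes are assumed column names:

library(dplyr)

# Approximation of the observed ordering: helpful reviews first, then 0-vote
# reviews (treated as a 0.5 ratio), then reviews with more downvotes than upvotes.
approximate_site_order <- function(d) {
  d %>%
    mutate(
      votes             = upvotes + downvotes,
      helpfulness_ratio = ifelse(votes == 0, 0.5, upvotes / votes)
    ) %>%
    arrange(desc(helpfulness_ratio))
}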

Chart 23

It is evident that there is some point of oversaturation somewhere between the 0th and the 70th position, where the average number of upvotes levels off, but it is hard to tell exactly where with this data range. So, we zoom into that region to determine the point semi-accurately. See Chart 23 below:

Bold x-axis labels (aside from “1”) denote the last review of each page.

Now, if we extrapolate the average scores to the number of people still reading reviews after they have already read a certain number of them, we see that ~75% of them churn after reading just the first page, ~80% after reading the second, and so on. It gets a little wonky after the ~7th page, where the number of votes a review receives becomes stable, fluctuating slightly around the 8-upvote mark. We think this is partly due to the drastic change in the number of observations included at these positions compared to the earlier ones. Nonetheless, from now on we will consider 70 reviews to be our point of oversaturation, beyond which reviews get significantly fewer votes than the reviews before it. The general trend for review visibility (instead of the number of upvotes) also supports this threshold, albeit less stringently.

Chart 24

Now we explore whether a similar dependency exists for review recency. We group all sampled reviews by their underlying products and, within those groups, assign each review an order number based on when it was posted relative to all other reviews of the same product. So, for a brand-new product that has yet to receive its first review, that review will get posting order 1, while a review written after 50 others will get posting order 51. See Chart 24 below:

The red line here denotes the number of upvotes a review receives. Here we can see that fluctuations in the number of votes are not due to spikes in negativity.

It seems that the first set of reviews receives more recognition on average as time goes on, but the relationship quickly devolves into chaos after the 100th posted review. Even the initial 100 reviews (grouped) don't follow the curve as smoothly as they do for helpfulness position. If you consider that sorting "by date" is the default for the review page of every product on Yandex Market, and that this sorting lists reviews from newest to oldest, an interesting conclusion can be drawn: newer reviews constantly refresh the review base, and therefore the default list sorted "by date" gets attention from people who don't want to dive deep enough to identify the most helpful reviews - each review, no matter its previous visibility score or helpfulness, gets a short-lived place in the limelight, where many unconcerned and unsophisticated viewers will see (and, supposedly, vote on) it. But then it is replaced by even newer reviews and no longer receives as much attention. This creates a ceiling on average visibility scores beyond which very few newer reviews climb; as shown in Chart 24 above, this ceiling is approximately 15 votes. Older reviews, having existed long enough to accumulate high ratings under both sorting options ("by date" and "by helpfulness"), continue to benefit from their high position in the top section of most helpful reviews even after they are no longer shown on the non-oversaturated pages of the most recent reviews.

To give some numbers: the average visibility score of reviews posted for a product that already had 100 reviews is 7.2, while for the first 20 reviews it is 33.6 - a 4.67-fold difference! When we analyze the means of these reviews grouped by posting date page-wise (how they are actually shown to the user), we see in Chart 25 that in the spiked region, which does not benefit from favorable "by helpfulness" sorting, the highest average visibility score for a page is 9.01, on page 27. Treating this number as the average maximum for reviews posted under conditions of oversaturation, we see a familiar page sitting just slightly above the rest - the 7th one. This is the very same page that determined the point of oversaturation for the helpfulness sorting. Thus, we conclude that reviews featured on the first seven pages, no matter the sorting style, consistently get more views than any review "in the back"; seven pages can be treated as the average attention span of customers.

Chart 25

We will need to account for these two regions in our regression model by adding a threshold indicator - a "time point of oversaturation" - that marks whether the posting order of a review was small enough to capitalize on the at-the-time unfilled top section of the "by helpfulness" ranking. The 70th posting position will serve as the threshold.
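A sketch of how the posting order and this threshold indicator can be constructed; product_id and posting_date are assumed column names:

library(dplyr)

reviews <- reviews %>%
  group_by(product_id) %>%
  arrange(posting_date, .by_group = TRUE) %>%
  mutate(posting_order = row_number()) %>%   # 1 = the product's earliest review
  ungroup() %>%
  mutate(
    # threshold indicator used in the regressions: posted after 70 earlier reviews
    oversaturation_point_passed = posting_order > 70
  )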

Now that we know that people react more vividly to extreme reviews AND that they can generally stomach only so much textual information, it is time to determine whether this could become a problem for the sorting algorithm, as we wouldn't want to see more extreme reviews in the top section than their true share of the review base. Otherwise, people would get an inaccurate idea of the overall level of satisfaction the authors (read: "people in general") have had with the product and of what to expect if they decide to buy the item. To do so, we plot the proportion of reviews attributable to each star rating, and then segment reviews by the rating of their underlying products to account for the status-quo effect. See Chart 26 below:

Chart 26.

It seems that many values are so close together that the size-2 ggplot2 points overlap. At first glance, it looks like, on average, the top section fairly represents the overall public sentiment about the products. To make sure we didn't make any mistakes, we examine more closely the divergence of top-section proportions from the proportions in the overall review base.

Chart 27 draws a similar picture: the differences in proportions mostly center around 0 and the majority do not leave the ±1% band. The distribution of these differences looks almost random, and it most likely is. The only notable observations are the 3- and 5-star rated products, which allocate 2.5% and 3.8% less space to 5-star reviews respectively, but even these can be disregarded - even an extra 5% of space for 5-star reviews would barely amount to 2 extra reviews in the top section.
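The divergences plotted here can be computed roughly as follows; the 50-review top section and the column names are assumptions carried over from the earlier sketches:

library(dplyr)

star_share <- function(d) d %>% count(stars) %>% mutate(share = n / sum(n))

# Difference between each star rating's share in the top section (first 50
# "by helpfulness" positions) and its share in the whole review base,
# computed separately for every aggregate product rating.
divergence <- reviews %>%
  group_by(product_rating) %>%
  group_modify(~ {
    top     <- star_share(filter(.x, helpfulness_position <= 50))
    overall <- star_share(.x)
    left_join(top, overall, by = "stars", suffix = c("_top", "_all")) %>%
      transmute(stars, share_diff = share_top - share_all)
  }) %>%
  ungroup()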

Chart 27.

At this point, we have everything we need to run our first regressions and see what factors influence people to vote on reviews.

Instantly, we face a problem: when we control for the product's popularity, price, type and complexity, not only do we explain just 19.77% of the visibility scores with our variables (adjusted R2), but many of the statistically significant variables have coefficients with very questionable directions (take "images", for example).

Call:

lm(helpfulness ~ posting_order * oversaturation_point_passed + longer_comm + not_segmented + every_section + verified + anonymous + default_avatar + images + verified + user_experience + word_count * word_threshold_passed + pros_average_words_per_BP + cons_average_words_per_BP + any_digit_BP + any_math_BP + any_emoji_BP + Home_applience + Computers_and_media + NUM_OF_REVIEWS + Popularity + Type + Complexity + Price_category + PRODUCT_CATEGORY + helpfulness_position:helpfulness_rating_over_70th_position + status_quo_divergence:factor(PRODUCT_RATING),

data = training, na.action = na.exclude)

Residuals:

Min 1Q Median 3Q Max

-129.10 -11.51 -2.39 5.49 2733.58

Coefficients: (note: the table font was not changed to Times New Roman because doing so breaks the table-like alignment and jumbles the data)

Estimate Std. Error t value Pr(>|t|)

(Intercept) 48.130705 1.519342 31.679 < 0.0000000000000002 ***

posting_order -0.472003 0.009189 -51.367 < 0.0000000000000002 ***

oversaturation_point_passedTRUE -27.760673 0.484720 -57.272 < 0.0000000000000002 ***

longer_commTRUE 0.238537 0.278883 0.855 0.392372

not_segmented 0.180186 0.563596 0.320 0.749190

every_section -0.071256 0.344165 -0.207 0.835979

verifiedTRUE -2.446348 0.422455 -5.791 0.000000007030759236 ***

anonymousTRUE 3.537255 0.369474 9.574 < 0.0000000000000002 ***

default_avatar -0.400685 0.270084 -1.484 0.137930

images1 -1.546580 0.541005 -2.859 0.004255 **

images3 -1.477994 0.821586 -1.799 0.072030 .

images4 -4.146572 1.113165 -3.725 0.000195 ***

images5 -2.419247 1.650127 -1.466 0.142626

images6 -3.515370 2.304571 -1.525 0.127166

images7+ 5.650992 2.028148 2.786 0.005333 **

user_experienceMore than a year -0.536467 0.358887 -1.495 0.134969

user_experienceSeveral months -1.694417 0.282920 -5.989 0.000000002119546319 ***

word_count 0.077153 0.002045 37.722 < 0.0000000000000002 ***

word_threshold_passedTRUE -15.349668 5.648529 -2.717 0.006580 **

pros_average_words_per_BP 0.134034 0.059693 2.245 0.024744 *

cons_average_words_per_BP -0.119060 0.043951 -2.709 0.006751 **

any_digit_BPTRUE 4.122629 0.820633 5.024 0.000000507852730578 ***

any_math_BPTRUE -1.713212 0.756852 -2.264 0.023601

any_emoji_BPTRUE -0.549959 7.074244 -0.078 0.938034

Home_applience -14.177172 0.597098 -23.743 < 0.0000000000000002 ***

Not_electricTRUE -15.012699 0.924200 -16.244 < 0.0000000000000002 ***

NUM_OF_REVIEWS 0.047610 0.001385 34.382 < 0.0000000000000002 ***

PopularityHigh -3.929363 0.388700 -10.109 < 0.0000000000000002 ***

PopularityMedium -7.108033 0.506078 -14.045 < 0.0000000000000002 ***

PopularityLow -18.409837 0.947438 -19.431 < 0.0000000000000002 ***

Typesearch 6.054237 0.283131 21.383 < 0.0000000000000002 ***

ComplexitySimple 9.001977 0.587582 15.320 < 0.0000000000000002 ***

Price_categoryLow -16.946000 0.417896 -40.551 < 0.0000000000000002 ***

Price_categoryMedium -9.819444 0.393388 -24.961 < 0.0000000000000002 ***

factor(stars)2 -5.983008 0.761771 -7.854 0.000000000000004075 ***

factor(stars)3 -7.631585 0.942377 -8.098 0.000000000000000565 ***

factor(stars)4 -7.612501 1.167293 -6.522 0.000000000070004465 ***

factor(stars)5 -6.593306 1.080072 -6.105 0.000000001035719692 ***

posting_order:

oversaturation_point_passedTRUE 0.439887 0.009286 47.373 < 0.0000000000000002 ***

word_count:

word_threshold_passedTRUE 0.076612 0.010883 7.040 0.000000000001940711 ***

helpfulness_position:

helpfulness_rating_over

_70th_positionFALSE -0.162094 0.006720 -24.123 < 0.0000000000000002

helpfulness_position:

helpfulness_rating_over

_70th_positionTRUE -0.053963 0.001529 -35.303 < 0.0000000000000002

status_quo_divergence:

factor(PRODUCT_RATING)2.5 -3.493996 1.156160 -3.022 0.002511 **

status_quo_divergence:

factor(PRODUCT_RATING)3 1.924359 1.305278 1.474 0.140407

status_quo_divergence:

factor(PRODUCT_RATING)3.5 2.465621 0.518904 4.752 0.000002021528856261 ***

status_quo_divergence:

factor(PRODUCT_RATING)4 3.438095 0.427998 8.033 0.000000000000000964 ***

status_quo_divergence:

factor(PRODUCT_RATING)4.5 5.510247 0.378233 14.568 < 0.0000000000000002 ***

status_quo_divergence:

factor(PRODUCT_RATING)5 3.744494 0.727767 5.145 0.000000267863438808 ***

Signif. codes: 0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1

Residual standard error: 35.2 on 83047 degrees of freedom

Multiple R-squared: 0.1981, Adjusted R-squared: 0.1977

F-statistic: 436.6 on 47 and 83047 DF, p-value: < 0.00000000000000022

rmse of the test sample 32.33031
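The reported test-sample RMSE can be obtained along these lines; the 80/20 split shown is illustrative, and model1 stands in for the full specification printed above:

set.seed(1)

# Illustrative 80/20 split; not the exact partition used in this work.
idx      <- sample(nrow(reviews), size = round(0.8 * nrow(reviews)))
training <- reviews[idx, ]
test     <- reviews[-idx, ]

# Placeholder for the full lm() formula above (the dependent variable is
# named "helpfulness" in the model call although it measures visibility).
model1 <- lm(helpfulness ~ ., data = training)

pred <- predict(model1, newdata = test)
sqrt(mean((test$helpfulness - pred)^2, na.rm = TRUE))   # test-sample RMSE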

As you can see, our attempted grouping of product categories into "home appliances", "computers and media" and "not electric" tells us that some product categories (namely, the model's default "computers and media") indeed have reviews with a much greater number of votes on average. However, we believe this makeshift grouping isn't enough.

We think we should also include the underlying product's category (as it is advertised in the Yandex Market catalogue), because when we were scraping data we saw product categories whose products had hundreds of reviews, yet their top reviews barely got 5-10 votes. Teas and coffee, for example, are especially notable: we saw some with 300-400 reviews of which only a handful had non-zero votes. Many other categories suffer from the same issue of heavily zero-inflated distributions.

So we correct our model by adding 61 more control variables for product categories:

Call:

lm(helpfulness ~ posting_order * oversaturation_point_passed + longer_comm + not_segmented + every_section + verified + anonymous + default_avatar + images + verified + user_experience + word_count * word_threshold_passed + pros_average_words_per_BP + cons_average_words_per_BP + any_digit_BP + any_math_BP + any_emoji_BP + Home_applience + Computers_and_media + NUM_OF_REVIEWS + Popularity + Type + Complexity + Price_category + PRODUCT_CATEGORY + helpfulness_position:helpfulness_rating_over_70th_position + status_quo_divergence:factor(PRODUCT_RATING) + PRODUCT_CATEGORY,

data = training, na.action = na.exclude)

Residuals:

Min 1Q Median 3Q Max

-126.39 -10.25 -1.53 5.96 2688.71

Coefficients:

Estimate Std. Error t value Pr(>|t|)

(Intercept) 43.899560 3.202755 13.707 < 0.0000000000000002 ***

posting_order -0.477833 0.008862 -53.921 < 0.0000000000000002 ***

oversaturation_point_passedTRUE -28.287468 0.468630 -60.362 < 0.0000000000000002 ***

posting_order:

oversaturation_point_passedTRUE 0.446602 0.008955 49.870 < 0.0000000000000002 ***

helpfulness_position:

helpfulness_rating_over_70th_positionFALSE -0.160225 0.006483 -24.714 < 0.0000000000000002 ***

helpfulness_position:

helpfulness_rating_over_70th_positionTRUE -0.052213 0.001476 -35.381 < 0.0000000000000002 ***

longer_commTRUE 0.047489 0.270795 0.175 0.860791

not_segmented -1.393419 0.565928 -2.462 0.013811 *

every_section 0.822913 0.332726 2.473 0.013391 *

verifiedTRUE -5.050519 0.423683 -11.921 < 0.0000000000000002 ***

anonymousTRUE -1.402352 0.364401 -3.848 0.000119 ***

default_avatar -0.354188 0.261352 -1.355 0.175354

images1 -0.547370 0.531996 -1.029 0.303531

images3 -0.820989 0.799430 -1.027 0.304438

images4 -2.202214 1.667081 -1.321 0.186505

images5 -1.722909 1.593704 -1.081 0.279668

images6 -1.896322 2.222824 -0.853 0.393599

images7+ 5.841756 1.956313 2.986 0.002826 **

user_experienceMore than a year -2.343326 0.352571 -6.646 0.0000000000302225 ***

user_experienceSeveral months -1.891279 0.274585 -6.888 0.0000000000057076 ***

word_count 0.077882 0.001990 39.131 < 0.0000000000000002 ***

word_threshold_passedTRUE -19.429025 5.443850 -3.569 0.000359 ***

word_count:

word_threshold_passedTRUE 0.082221 0.010493 7.836 0.0000000000000047 ***

pros_average_words_per_BP 0.124953 0.057542 2.172 0.029895 *

cons_average_words_per_BP -0.089094 0.042377 -2.102 0.035520 *

any_digit_BPTRUE 3.803026 0.791397 4.805 0.0000015466434061 ***

any_math_BPTRUE -1.618371 0.729760 -2.218 0.026580 *

any_emoji_BPTRUE 3.190823 6.818374 0.468 0.639804

Home_applience -1.491167 5.285458 -0.282 0.777847

Not_electricTRUE 0.898375 6.186420 0.145 0.884540

NUM_OF_REVIEWS 0.051070 0.001428 35.760 < 0.0000000000000002

PopularityHigh -2.825431 0.400568 -7.054 0.0000000000017576 ***

PopularityMedium -5.357493 0.529463 -10.119 < 0.0000000000000002 ***

PopularityLow -10.519910 0.971753 -10.826 < 0.0000000000000002 ***

Typesearch 4.742984 1.960826 2.419 0.015571 *

ComplexitySimple -10.439888 4.712483 -2.215 0.026737 *

Price_categoryLow -4.478692 0.522494 -8.572 < 0.0000000000000002 ***

Price_categoryMedium -2.462816 0.428579 -5.746 0.0000000091442939

PRODUCT_CATEGORY air humidifiers 1.892986 1.423446 1.330 0.183568

PRODUCT_CATEGORY baby carriers 3.353488 2.377920 1.410 0.158466

PRODUCT_CATEGORY cameras 53.733503 2.182075 24.625 < 0.0000000000000002 ***

PRODUCT_CATEGORY car cameras -15.172479 2.439947 -6.218 0.0000000005047481 ***

PRODUCT_CATEGORY cellphones 2.008435 2.845678 0.706 0.480324

PRODUCT_CATEGORY coffee -2.249172 4.233303 -0.531 0.595209

PRODUCT_CATEGORY coffee machines 9.762584 1.825741 5.347 0.0000000895653745 ***

PRODUCT_CATEGORY computer acoustics 3.560306 3.027941 1.176 0.239671

PRODUCT_CATEGORY computer batteries -6.461197 3.207534 -2.014 0.043973 *

PRODUCT_CATEGORY computer chairs -3.715299 2.883795 -1.288 0.197632

PRODUCT_CATEGORY computer coolers -8.847296 2.584156 -3.424 0.000618 ***

PRODUCT_CATEGORY computer mice -12.386894 2.188677 -5.660 0.0000000152282346 ***

PRODUCT_CATEGORY dish washers 7.713708 3.320501 2.323 0.020179

PRODUCT_CATEGORY drugs -3.468104 5.913322 -0.586 0.557548

PRODUCT_CATEGORY electric toothbrushes 4.881679 1.779383 2.743 0.006081 **

PRODUCT_CATEGORY eyeshadows 2.075875 3.531611 0.588 0.556669

PRODUCT_CATEGORY freezers 5.852380 3.886954 1.506 0.132162

PRODUCT_CATEGORY gamepads -12.619182 2.556551 -4.936 0.0000007988822135 ***

PRODUCT_CATEGORY gaming consoles 7.173570 3.170243 2.263 0.023652 *

PRODUCT_CATEGORY GPS -11.135917 4.045964 -2.752 0.005918 **

PRODUCT_CATEGORY hairdryers 10.127379 2.171648 4.663 0.0000031142996488 ***

PRODUCT_CATEGORY headphones and earbuds -11.870746 2.881790 -4.119 0.0000380514433619 ***

PRODUCT_CATEGORY heaters 1.983701 1.868091 1.062 0.288290

PRODUCT_CATEGORY ironing boards -11.053594 5.547572 -1.993 0.046318 *

PRODUCT_CATEGORY irons 3.020214 2.688571 1.123 0.261291

PRODUCT_CATEGORY irrigators -0.898310 1.607169 -0.559 0.576205

PRODUCT_CATEGORY laptops -7.219005 3.068799 -2.352 0.018656 *

PRODUCT_CATEGORY lawn mowers 7.012302 3.793309 1.849 0.064519

PRODUCT_CATEGORY lightbulbs -3.509803 5.455824 -0.643 0.520023

PRODUCT_CATEGORY makeup removers -0.813522 2.621949 -0.310 0.756353

PRODUCT_CATEGORY mascaras -1.764583 3.074440 -0.574 0.566001

PRODUCT_CATEGORY mattresses -4.626415 5.398622 -0.857 0.391468

PRODUCT_CATEGORY meat grinders 4.056809 1.848111 2.195 0.028158

PRODUCT_CATEGORY men's electric razors 6.448510 1.694951 3.805 0.000142

PRODUCT_CATEGORY microwaves 3.166752 2.076190 1.525 0.127195

PRODUCT_CATEGORY milking pumps -1.670196 2.829098 -0.590 0.554949

PRODUCT_CATEGORY modems -3.807728 2.918104 -1.305 0.191943

PRODUCT_CATEGORY monitors 1.326007 3.141870 0.422 0.672994

PRODUCT_CATEGORY motor oils -12.441651 7.834374 -1.588 0.112271

PRODUCT_CATEGORY ovens 3.996918 3.082885 1.296 0.194812

PRODUCT_CATEGORY portable amps -10.079721 2.241115 -4.498 0.0000068806144960 ***

PRODUCT_CATEGORY printers 3.146563 2.471529 1.273 0.202978

PRODUCT_CATEGORY projectors -19.913267 4.619979 -4.310 0.0000163257206341 ***

PRODUCT_CATEGORY refrigerators 3.742762 2.463887 1.519 0.128754

PRODUCT_CATEGORY rumbas -12.439497 4.926517 -2.525 0.011571 *

PRODUCT_CATEGORY slow cookers 9.545807 1.420361 6.721 0.0000000000182030 ***

PRODUCT_CATEGORY

smart watches and bracelets -15.579207 2.241595 -6.950 0.0000000000036782 ***

PRODUCT_CATEGORY soundcards -2.918612 2.556369 -1.142 0.253581

PRODUCT_CATEGORY SSD -0.422310 2.203748 -0.192 0.848030

PRODUCT_CATEGORY stoves 1.470749 3.324318 0.442 0.658186

PRODUCT_CATEGORY tablets -1.917659 2.398716 -0.799 0.424031

PRODUCT_CATEGORY teapots 0.237649 1.341016 0.177 0.859339

PRODUCT_CATEGORY trimmers 9.221089 1.749475 5.271 0.0000001361868148 ***

PRODUCT_CATEGORY TV sets -13.282150 2.354018 -5.642 0.0000000168307252 ***

PRODUCT_CATEGORY vacuum cleaners 4.712874 1.407335 3.349 0.000812 ***

PRODUCT_CATEGORY VR glasses -9.322229 4.933539 -1.890 0.058820 .

PRODUCT_CATEGORY washing machines -4.917749 5.012506 -0.981 0.326548

PRODUCT_CATEGORY water heaters -8.966521 5.080351 -1.765 0.077577 .

PRODUCT_CATEGORY water pumps -14.803586 5.674018 -2.609 0.009082 **

PRODUCT_CATEGORY webcams 1.726483 4.593561 0.376 0.707030

PRODUCT_CATEGORY wi-fi and Bluetooth -6.926193 2.144060 -3.230 0.001237 **

factor(stars)2 -5.164450 0.737111 -7.006 0.0000000000024648 ***

factor(stars)3 -6.332642 0.917658 -6.901 0.0000000000052052 ***

factor(stars)4 -7.070191 1.141175 -6.196 0.0000000005835929 ***

factor(stars)5 -6.462798 1.053310 -6.136 0.0000000008516407 ***

status_quo_divergence:

factor(PRODUCT_RATING)2.5 -3.035223 1.190893 -2.549 0.010814 *

status_quo_divergence:

factor(PRODUCT_RATING)3 1.510034 1.291173 1.170 0.242203

status_quo_divergence:

factor(PRODUCT_RATING)3.5 2.981193 0.512673 5.815 0.0000000060855319 ***

status_quo_divergence:

factor(PRODUCT_RATING)4 4.865754 0.426268 11.415 < 0.0000000000000002 ***

status_quo_divergence:

factor(PRODUCT_RATING)4.5 5.550936 0.370279 14.991 < 0.0000000000000002 ***

status_quo_divergence:

factor(PRODUCT_RATING)5 4.015363 0.703486 5.708 0.0000000114826335 ***

Signif. codes: 0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1

Residual standard error: 33.91 on 82986 degrees of freedom

Multiple R-squared: 0.2563, Adjusted R-squared: 0.2553

F-statistic: 264.8 on 108 and 82986 DF, p-value: < 0.00000000000000022

rmse of the test sample 30.99829

This massive influx of additional control variables had a strictly positive effect: the adjusted R2 is still fairly low, but it is now 25.53%, which is 5.76 percentage points higher than before, meaning our model became better at predicting the number of votes a review receives; the RMSE on the test sample dropped by 4.12% and is now at a (still high) ~31 votes. The residual errors of the training and test samples are close together, so we can say that the values of most of the model's coefficients are not due to sampling biases.

Now we examine the coefficients of the regression and see how they match our hypotheses.

Our model shows that the demand for reviews is limited: for every review posted before yours, your (newly posted) review will get 0.48 fewer votes on average. If you post after 70 reviews already exist, the cumulative penalty is somewhat smaller: at the 71st position you would lose 33.45 votes on average had we not controlled for the oversaturation point; with the threshold terms, the marginal decrease (seen for the first 70 positions in [Chart 24]) loses momentum, so you only take a drop of about 30.47 votes, and each further review is penalized by only about 0.03 votes. At this point many reviews already have 0 votes, so it is no wonder they no longer lose votes - they simply have nothing left to lose.
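These figures follow directly from the coefficients of the second model; a quick check using the estimates quoted above:

b_order       <- -0.477833   # posting_order
b_passed      <- -28.287468  # oversaturation_point_passedTRUE
b_interaction <-  0.446602   # posting_order:oversaturation_point_passedTRUE

70 * b_order                               # -33.45: penalty at the 71st review, threshold terms ignored
b_passed + 70 * (b_order + b_interaction)  # about -30.47: penalty once the threshold terms apply
b_order + b_interaction                    # about -0.03: marginal penalty per further earlier review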

The confidence level of all variables relating to the position of a review is 99.9%.

Frankly, viewers don't really care if some sections are longer than others: reviews whose "comment" section is longer than their "pros" and "cons" show no significant effect on visibility. But people do care that a review includes every section: unsegmented reviews get 1.4 fewer votes on average than reviews with clearly defined sections, and a review that has all 3 sections gets 0.82 more votes on average.

Both p-values are ~0.014, so these factors are significant predictors at the 95% confidence level.

The very first baffling observation is that reviews by authors whose purchases had been verified by Yandex receive 5.05 fewer votes on average than those whose purchases weren't. This variable is significant at the 99.9% confidence level.

Reviews by anonymous authors get 1.4 fewer votes on average than those of registered users (99.9% confidence level). Your avatar picture doesn't matter.

Only the "7+" images group of reviews is significantly different from 0 (p-value < 0.01); such reviews can expect 5.84 more votes on average than reviews without any images at all. As a note, when we were collecting the data we noticed that almost all reviews with 7-8+ images are very well written and argued, often including a highly comprehensive analysis of the item, which can confound the significance of the number of photographs alone. Still, "7+ images" can serve as a good indicator of this comprehensiveness.

Another surprising finding is that the more experience with the product an author claims to have had, the fewer votes he/she receives on average: 2.34 fewer for "More than a year of use" and 1.89 fewer for "Several months". Both are significant at the 99.9% CL.

The more words there are in a review, the more prominent it becomes. Each 10 words add about 0.78 votes, and after the 400th word each 10 words add an extra 0.82 votes, along with a flat +13.46 votes net of the negative threshold coefficient evaluated at the 400-word point we used to distinguish "very long reviews". The variable "word_threshold_passed" was introduced later, when we were choosing how to measure review characteristics, prompted by our findings about the saturation points. Initially we thought that viewers are less inclined to read lengthy reviews and even more disinclined to read super-long reviews, which we ballparked at about 400 words (more than one page scroll on Yandex Market at 100% magnification), and that beyond this point reviews would get even fewer votes; we were proven wrong - they in fact receive even more votes on average.
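Again, a quick arithmetic check of these figures against the model's coefficients:

b_words     <-  0.077882   # word_count
b_threshold <- -19.429025  # word_threshold_passedTRUE
b_interact  <-  0.082221   # word_count:word_threshold_passedTRUE

10 * b_words                      # ~0.78 extra votes per 10 words below the threshold
10 * b_interact                   # ~0.82 additional votes per 10 words beyond 400 words
b_threshold + 400 * b_interact    # ~ +13.46 votes: net jump evaluated at the 400-word threshold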

The confidence level (CL) of all variables relating to the wordiness of a review is 99.9%.

As to formatting, people are generally attracted by numbered bullet-point lists in the "pros" and "cons" sections - such reviews receive 3.8 more votes on average than reviews without any bullet-point formatting - but are slightly more hesitant to read "-" and "+" style bullet lists: reviews that feature them get 1.62 fewer votes on average. Emoji-style bullet-point lists don't invoke any specific response.

The average length of a bullet point matters as well: positive sections gain an additional 1.25 votes for every 10 words of average bullet-point length, while negative sections lose 0.9 votes for the same change.

The confidence level of all variables relating to the bullet-point formats is at least 95%.

Next come our attempts to reduce the number of variables controlling for the product's category by dividing categories into 3 groups based on their engineering complexity: "Home appliances" (dishwashers, vacuum cleaners, stoves, air conditioners, etc.), "Computers and media" (laptops, TVs, computers, cellphones and other complex products; excluded from the model to avoid multicollinearity issues) and "Not electronic" (everything else, like cosmetics and food items). As can be seen from the extremely high p-values of these variables, they were overshadowed by the more accurate individual product category variables.

The overall number of reviews for a product, however, can serve as a direct, uncategorized indicator of the product's popularity and, obviously, the more popular a product is, the more attention its reviews will receive on average. Each 10 reviews on the product increase the expected average number of votes on ALL of its reviews by 0.51 votes (99.9% CL).

Categorized indicators work just as well: reviews of products with 200+ reviews (very highly popular) are the base level against which the others are compared and are thus excluded from the model. Reviews of products with 100-199 reviews (highly popular) get 2.83 fewer votes on average, of products with 50-99 reviews (medium popularity) 5.36 fewer, and of products with fewer than 50 reviews (unpopular) 10.52 fewer. CL 99.9% for all.

Reviews for search goods get on average 4.74 more votes than those for experience goods, reviews for complex products get on average 10.44 more votes than reviews for simple products, and reviews for cheap and mid-range products get on average 4.48 and 2.46 fewer votes respectively than reviews for expensive products.

Among the effects the underlying product's category has on review visibility, we will only discuss those whose CL is greater than 95%. All values are compared against the average votes received by reviews of air conditioners (a product category whose reviews have 10.5 votes on average).

Reviews for professional video- and photo cameras get 53.73 more votes on average. This group includes the majority of the in-the-thousands outliers.

Reviews for car videocameras, on the other hand, get 15.17 votes less.

Reviews for coffee machines get 9.76 more votes.

Reviews for computer batteries get 6.46 less votes.

Reviews for computer coolers get 8.85 less votes.

Reviews for computer mice get 12.39 less votes.

Reviews for dish washers get 7.71 more votes.

Reviews for electric toothbrushes get 4.88 more votes.

Reviews for gamepads and joysticks get 12.62 less votes.

Reviews for gaming consoles get 7.17 more votes.

Reviews for GPS-navigators get 11.14 less votes.

Reviews for hairdryers get 10.13 more votes.

Reviews for headphones and earbuds get 11.87 less votes.

Reviews for ironing boards get 11.05 less votes.

Reviews for laptops get 7.22 less votes.

Reviews for meat grinders get 4.06 more votes.

Reviews for men's electric razors get 6.45 more votes.

Reviews for portable amps get 10.08 less votes.

Reviews for projectors get 19.91 less votes.

Reviews for rumbas get 12.44 less votes.

Reviews for slow cookers get 9.55 more votes.

Reviews for smart watches and bracelets get 15.58 less votes.

Reviews for trimmers get 9.22 more votes.

Reviews for TV sets get 13.3 less votes.

Reviews for vacuum cleaners get 4.71 more votes.

Reviews for water pumps get 14.8 less votes.

Reviews for Wi-fi devices get 6.93 less votes.

Among reviews of different star ratings, 5-star reviews get 6.46 fewer votes on average than 1-star reviews, 4-star reviews 7.07 fewer, 3-star reviews 6.33 fewer and 2-star reviews 5.16 fewer. This supports our hypothesis that people specifically filter out all but the most negative reviews.

When it comes to the difference between a review's star rating and the product's aggregate rating from all users, the results speak more to the inflated visibility of negative reviews, as people seem to be interested mainly in controversial opinions of the condemning variety: for products with high aggregate ratings (4, 4.5 and 5 stars), reviews with high status-quo divergence (almost exclusively negative reviews) receive 4.02 more votes (5.0-rated products), 5.55 more (4.5-rated products) and 4.87 more (4.0-rated products) on average for each star by which they differ from the status quo. This relationship reverses for poorly rated products (2.5 stars), where reviews are penalized 3.04 votes on average for each star they differ from the status quo (positive reviews being the main source of divergence in this group). Sadly, we didn't have enough observations of 2.0-rated products to further validate our findings.

All control groups except the 3.0-rated products are significant at at least the 95% CL.
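The divergence variable used in these interaction terms can be constructed as in the sketch below; the absolute-difference definition is our reading of "each star that they differ from the status quo", and the column names are assumed:

library(dplyr)

reviews <- reviews %>%
  mutate(
    # how many stars a review's rating deviates from the product's aggregate rating
    status_quo_divergence = abs(stars - product_rating)
  )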

We conclude that the least visible review groups are short reviews, reviews posted after a large number of others had already been posted, and positive reviews, especially those congruent with the public sentiment about the underlying product's quality.

Among the products whose reviews receive the fewest votes are car cameras, computer mice, gamepads, ironing boards, portable amps, projectors, rumbas, TV sets and water pumps.

Reviews of cheap, technically simple experience products also receive few votes. What products come to mind when you combine all of these traits? Correct - food items, household chemicals and other everyday consumption products. You might have noticed that those product categories aren't present in any of our samples. That is because we only included products with 40+ reviews, and products in those categories rarely got any reviews at all (save for tea and coffee, which are a special case due to their popularity), and when they did, nobody actually voted on them.

RQ #2

Now we turn to modelling review helpfulness. Once again, we build a regression model with similar specifications [Model Output 3].

The results the model provided were unexpected, to say the least. We had expected many more significant predictors of helpfulness, and at this point we start to believe that controlling for product type when predicting helpfulness does more harm than good - it merely inflates the number of variables in the model, whose effects are either insignificantly different from 0 or point in spurious directions that cannot be reasoned about logically. So product categorization only seems helpful when dealing with the absolute values of visibility scores, mainly by controlling for the underlying product's popularity, and not with the generalized helpfulness ratio.

Lengthy reviews are the most helpful: each 10 words add 0.52% to a review's helpfulness. If the review is longer than 400 words, it receives on average a flat 26.34% higher ratio, and each further 10 words then generate an additional 0.4% of helpfulness. Interestingly, reviews written for technically uncomplicated products show an additional average 0.3% increase in helpfulness, and reviews lose 0.12% for the same number of words if the author stated that he/she had more than a year of experience with the product.

Newer reviews also proved to be more helpful than older ones. To measure recency, we used the posting order: with each 10 reviews already posted for the underlying product, a new review becomes 0.4% more helpful. However, this only holds before the point of oversaturation is breached; after it, reviews on average become 0.133% less helpful, and helpfulness keeps decreasing at a very small rate. It also seems that for simple products the point of oversaturation comes even earlier than 70 reviews, as reviews for technically uncomplicated products lose an extra 0.28% of their helpfulness score with each review posted before them.

Anonymously written reviews show 1.31% lower helpfulness scores on average.

The "Verified" tag above a review offers very confusing relationships with review helpfulness across product types: it indicates a 3.5% loss in helpfulness for cheap products, a 1.38% gain for mid-range products, and is insignificant for expensive products. We speculate that even though its p-values are below 0.05, the estimates are still likely due to chance or to some odd confounder responsible for the majority of the described effect. However, when combined with author-claimed experience of more than a year or of several months with the product, it indicates an average increase in helpfulness of 8.43% and 3.02% respectively.

...
