Friday, July 16, 2010

Spam Pre-Tests Suck Wind - a Five-Hour Saga of Frustration

Yesterday, I spent five hours (our development team joined me for two of those hours) attempting to get the Spam Score on a legitimate marketing email down from its original 8 to below the recommended threshold of 5.

Is it just me, or is that timeframe WAY out of line?

Here's what happened:
The email in question involved a client who runs an auto dealership in California. It was a standard sales-and-service email marketing piece - current deals, a finance app, terms & conditions... nothing out of the ordinary at all. Due to the nature of the client's business, there were of course the usual red flags in the content (financial wording, "Deal," "Offer," "Special" repeated ad nauseam, etc.), so I did tweak the content a bit prior to loading it into our email deployment solution to be tested.

Upon loading the code into the messenger system, I ran the in-solution spam score testing, which is based on SpamAssassin (a sensible choice, since SpamAssassin's rules are used by many filters). That's when things started to go south.
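For anyone who wants to run the same kind of check outside a deployment solution, here's a minimal sketch, assuming the spamassassin command-line tool is installed locally; the filename is a hypothetical placeholder.

    # Minimal sketch: score a saved raw email with a local SpamAssassin install.
    # Assumes the `spamassassin` CLI is on the PATH and that "campaign.eml"
    # (a hypothetical filename) contains the full message, headers included.
    import subprocess

    with open("campaign.eml", "rb") as f:
        raw_message = f.read()

    # -t (test mode) appends a report listing each rule hit and its point value.
    result = subprocess.run(
        ["spamassassin", "-t"],
        input=raw_message,
        capture_output=True,
    )
    print(result.stdout.decode("utf-8", errors="replace"))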

The spam score for the original piece was an 8. An 8! I've done email marketing for over 10 years, and I have not once seen a score as high as an eight for a legitimate marketing piece. I have had scores over the recommended 5 before, but it was always for a reason that was easily rectified, such as a specific, obvious trigger word or phrase. Having done this for so long, I'm pretty familiar with what sets these filters off.

But in this case, the main causes of the high score were two rules I'd never dealt with before: FB_GET_MEDS ("BODY: Looks like trying to sell meds") and MPART_ALT_DIFF_COUNT ("BODY: HTML and text parts are different"). The first thing I did (being the easiest issue to address) was go through and change the ALT tags on all the images in the HTML to match text from the text version. Reran the test - no dice. After about 15 minutes of tweaks, I finally copied a paragraph directly from the text version and pasted it into the bottom of the HTML. Lo and behold, that error was no longer on the report.
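For reference, MPART_ALT_DIFF_COUNT fires when the plain-text and HTML parts of a multipart/alternative message diverge too much, which is why pasting matching copy into the HTML cleared it. Here's a minimal sketch of building both parts from the same copy; all addresses and content strings are hypothetical placeholders.

    # Minimal sketch: a multipart/alternative message whose plain-text and HTML
    # parts carry the same copy, avoiding the mismatch MPART_ALT_DIFF_COUNT flags.
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    copy = "July Service Special: free code check with any oil change."

    msg = MIMEMultipart("alternative")
    msg["Subject"] = "July Service Specials"
    msg["From"] = "dealer@example.com"
    msg["To"] = "subscriber@example.com"

    # Order matters: plain-text part first, HTML (the preferred rendering) last.
    msg.attach(MIMEText(copy, "plain"))
    msg.attach(MIMEText(f"<html><body><p>{copy}</p></body></html>", "html"))

    print(msg.as_string())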

But the score was still a 5.1, and unacceptable. The 1.5-point penalty for having a low text-to-image ratio was going to stay regardless, since the HTML was so image-heavy, so I turned to the problem of the Meds rule violation. That rule alone was adding a hefty 3.6-point penalty, so it had to be addressed prior to sending.
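If you want a rough sense of how image-heavy a piece is before testing, a quick check like the one below works; the chars-per-image figure is an illustrative heuristic of my own, not SpamAssassin's actual ratio rule.

    # Rough text-to-image balance check for an HTML part. This is an
    # illustrative heuristic only, not SpamAssassin's actual ratio rule.
    from html.parser import HTMLParser

    class RatioParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.image_count = 0
            self.text_chars = 0

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                self.image_count += 1

        def handle_data(self, data):
            self.text_chars += len(data.strip())

    parser = RatioParser()
    with open("campaign.html") as f:  # hypothetical filename
        parser.feed(f.read())

    per_image = parser.text_chars / max(parser.image_count, 1)
    print(f"{parser.text_chars} text chars, {parser.image_count} images "
          f"(~{per_image:.0f} chars per image)")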

Since there were no mentions of Cialis or Viagra or anything remotely close to that, I figured, well, maybe the section discussing free diagnostic fees was the trigger? I changed it to "free code check fees." No change.

After about an hour of this, I was fed up. So I enlisted the help of the tech team. After explaining my conundrum to the CTO, he asked a couple of the developers to take a look, as well as our HTML programmer. They threw out possibilities, and I continued to test each one.

Our frustration built as we made dozens of content changes, including: removing much of the "special," "offer," "coupon" type phrasing; taking out many of the adjectives (one sentence said "big" and "huge" very close together... perhaps that was causing the problem?); removing % signs and $ signs; even editing the word "enjoy" - just in case. Nothing dropped the score below 5.

Finally, after 3 hours and a missed deadline, we found the culprit: one of the terms I was using for Google Analytics tagging, "ViewOnline," was the trigger. When we changed it to "WebView," the score dropped to 1.5.
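Had we known to look, we could have read the rule itself instead of guessing: SpamAssassin rule definitions are plain-text .cf files on any machine with it installed. Here's a minimal sketch for finding one, assuming the common default rules directory; depending on the install, updated and third-party rules may live under /var/lib/spamassassin instead.

    # Minimal sketch: search local SpamAssassin rule files for a rule name so
    # you can read the actual regex. The directory is a common default and may
    # differ by install; sa-update rules often live under /var/lib/spamassassin.
    import pathlib

    RULES_DIR = pathlib.Path("/usr/share/spamassassin")

    for rule_file in sorted(RULES_DIR.glob("*.cf")):
        for line in rule_file.read_text(errors="replace").splitlines():
            if "FB_GET_MEDS" in line:
                print(f"{rule_file.name}: {line.strip()}")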

My takeaways from this horror show:

  1. The error the Spam Score tool reports is not always the actual cause of the violation.

  2. SpamAssassin's support truly stinks. I tried checking their Wiki, Googling, and searching their site... while I understand that they don't want to reveal the actual trigger words to spammers, I expected to at least find another marketer who shared the same frustrations. But I didn't. So I'm writing one myself!

  3. Don't dismiss any seemingly harebrained explanation for these violations. Who knew that part of a GA tag would trip a spam filter, let alone one that dealt with meds?

  4. Document, document, document. I wrote down each violation and how we solved it, and am building an internal Wiki for our future emails.


Have your own tale of woe trying to solve spam filter issues? I'd love to hear about it. Please post it in the comments section so we can all help one another out.



2 comments:

  1. Here is where the testing begins. Choose 300 recipients from the list. Send 100 marketing pieces to one group, 100 to another group and 100 to the third. Keep as many of the variables constant as you can. Send the tests on the same day and be sure the same type of stamp or permit is used. Be precise: developing an effective direct mail campaign requires it.


  2. We just had FB_GET_MEDS triggered by "Online-Media". The rule is really awfully written (its definition can be found online).
