If a Republican campaign spokeswoman says the other party’s candidate “and his special interest allies in Washington are plotting to spend over $13 million” in a race and has verified figures to support that claim and more, she should have nothing to fear from a fact-checking organization. PolitiFact, a national fact-checking effort co-sponsored by the prestigious Poynter Institute and several major daily newspapers, found the Republican spokeswoman’s claim no better than Half True.
If a Republican schools commissioner says an annual standardized test takes “less than 1 percent of the instructional time,” and the actual figure is between 0.26 percent and 0.90 percent of annual class time, a serious fact-checker wouldn’t make a different claim and check that instead. But that’s precisely how PolitiFact found the Republican commissioner’s statement False.
If a conservative advocacy group runs an ad saying Obamacare could cost “up to $2 trillion,” an honest fact-checker would look up the government’s own estimate and see that, indeed, the Congressional Budget Office puts the cost at $1.76 trillion for just the first few years.
PolitiFact is not that honest fact-checker. And these aren’t isolated cases. Once widely regarded as a unique, rigorous and reasonably independent investigator of political claims, PolitiFact now declares conservatives wrong three times as often as liberals. More pointedly, the journalism organization concludes that conservatives have flat-out lied nine times as often as liberals.
If you were a fact-checker yourself, you might reasonably conclude that PolitiFact is biased — that it favors liberals over conservatives. But PolitiFact continues to assert its impartiality.
PolitiFact editor Bill Adair did not respond to interview requests. But liberals defend the organization. They gleefully point to PolitiFact’s lopsided numbers as evidence that a neutral arbiter has declared liberal politicians are more honest than their conservative counterparts.
“The Left just might be right more often (or the Right wrong more often),” writes Chris Mooney of The Nation, “and the fact-checkers simply too competent not to reflect this — at least over long periods.”
Another liberal observer suggests the numbers would be even more lopsided, except that PolitiFact has gotten tougher on liberal claims merely to preserve the appearance of impartiality. That was author Dylan Otto Krider’s explanation of PolitiFact’s 2011 Lie of the Year – that Republicans voted to “end Medicare.”
“As a non-partisan outfit, PolitiFact probably feels compelled to blow a few things the left says out of proportion or they wouldn’t look that much different than [the liberal front group] Media Matters,” Krider writes. “PolitiFact has pulled the yoke about as far as it can go without breaking, and have lost nearly all credibility on the left as a result, and they’re still not within 20 yards from the 50 yard line.”
PolitiFact finds that all political discourse fits neatly into one of six categories on what it calls its “Truth-O-Meter,” a colorful graphic that depicts PolitiFact’s conclusions about the political statements it examines — from True to False to Pants on Fire (from the well-known schoolyard chant about liars). The Pants-on-Fire tag is for claims the fact-checkers find not just false but ridiculous, and they slap conservatives with it nine times as often as liberals.
You could argue — as Mooney, Krider, Paul Krugman and others do — that PolitiFact is right: conservatives are simply stupid or prodigious liars. Or you could do what we have done: dig into PolitiFact’s strained analyses one at a time. That doesn’t illuminate the origins of the bias, but it sure reveals the mechanism by which the left-leaning organization transforms true into false and false into true.
HOW IT WORKS
Remember the three anecdotes with which we began here? PolitiFact pronounced the three claims Half True, False, and False, respectively. In each case, the fact-checkers dismissed the speaker’s claim, made up a different claim and checked that instead.
In the first example, PolitiFact Ohio reporter Joe Guillen acknowledged that Republican spokeswoman Izzy Santa said something “literally true” — that incumbent Democrat U.S. Sen. Sherrod Brown and his backers were spending $13 million in the current race. Remarkably, he still declared the statement only Half True. Guillen achieved that rhetorical sleight of hand by determining on his own that Santa probably meant to discuss only money spent by groups outside Brown’s control — even though her words explicitly referred to spending both by outside groups and by Brown’s campaign (“Sherrod Brown and his special interest allies” and “Brown and his supporters”). The combined spending of Brown and his supporters was actually higher than $13 million. But if you pretend she didn’t include Brown, then you can pretend she said something wrong.
A Florida schools official was a victim of the same sort of willful refusal to acknowledge the meaning of plain English. Teachers unions and other critics had argued that annual standardized tests took precious class time from instruction. In response, Republican schools commissioner Gerard Robinson said the typical two or three tests per student per year “account for less than 1 percent of the instructional time provided during the year.” He backed it up with data showing the tests specifically took from 0.26 to 0.90 percent of annual class time. But PolitiFact Florida reporter Amy Sherman was determined to put words in Robinson’s mouth. “Robinson used the phrase ‘instructional time’ in his claim, which could fairly be interpreted to mean classroom time spent preparing for the test,” she wrote. Then she clucked about teaching to the test. Of course, “instructional time” needs no interpretation; it means “class time.” And that’s not even the phrase Sherman “interpreted.” She replaced a highly specific claim – two to three assessments per student per year – with her own concern: classroom time spent teaching to the test. Then she checked whether time spent teaching to the test was more than one percent, and failed even to establish that. And then — presto! — she ruled Robinson’s original claim False. That’s not fact-checking or even opinion journalism. It’s lying.
The Congressional Budget Office estimates that Obamacare “represents a gross cost to the federal government of $1,762 billion,” or $1.76 trillion, over the next decade, and that the costs will grow over time. Yet PolitiFact still managed to treat that bedrock number as something to be dismissed. In critiquing an advertisement that attacked the program’s costs, PolitiFact editor Angie Drobnic Holan wrote that “the $1.76 trillion number itself is extreme cherry-picking. It doesn’t account for the law’s tax increases, spending cuts or other cost-saving measures.” On paper, the Obama administration projects that new taxes and Medicare cuts will offset the new program’s costs for a while. But offsets don’t change the gross cost, which is indeed “up to $2 trillion.” That would make the statement True, of course. Incidentally, the CBO’s 10-year cost figures will be closer to $3 trillion in a few years, if current forecasts prove accurate.
OFF COURSE
PolitiFact started off straight. Formed in 2007 as a partnership of Congressional Quarterly and the Tampa Bay Times (then the St. Petersburg Times), the outfit won a Pulitzer Prize for its coverage of the 2008 election. The partnership dissolved shortly thereafter, when the Poynter Institute – the parent company of both outfits – sold off CQ.
The Florida journalists carried on alone, and their liberal tendencies became more obvious as the “Pants on Fire” rulings piled up on one side. By one count, from the end of that partnership to the end of 2011, the national PolitiFact operation issued 119 Pants on Fire ratings for Republican or conservative claims, and only 13 for liberal or Democratic claims.
In another tally, just of claims made by elected officials, Republicans lose 64-10 over the same three-year period.
Those numbers were compiled by Bryan White, who co-founded PolitiFactBias, a blog dedicated to chronicling examples of what he considers poor reasoning, sloppy research, or bias by PolitiFact.
In considering all rulings in which a claim is found untrue (False and Pants on Fire rulings combined), two things are obvious: first, PolitiFact thinks Republicans are wrong far more often than Democrats; second, when Republicans are wrong, they’re often said to be lying, while Democrats are merely mistaken.
In the three years since the end of the partnership with CQ, PolitiFact has found a total of 323 conservative claims to be untrue, with 119 of those getting Pants on Fire.
In the same time, it’s found 105 liberal claims to be untrue, with just 13 deemed Pants on Fire, according to White’s tally.
“The Pants on Fire rating tells the reader nothing about the claim other than the fact that PolitiFact finds it ridiculously false,” White said in an interview.
Prof. Eric Ostermeier at the University of Minnesota Humphrey School of Public Affairs, who examined more than 500 PolitiFact stories from January 2010 through January 2011, found the same two tendencies that White did: Republicans are called wrong more than three times as often, and when they’re found wrong, they’re more than three times as likely to be called a liar.
White marks the end of the CQ partnership as the turning point in PolitiFact’s reliability. A few examples show how the operation evolved.
In 2007, PolitiFact was checking numbers thrown around in debates, such as whether 300,000 babies annually are born deformed (False: it’s 40,000), or whether Social Security “is solid through about 2040 without any changes whatsoever” (True, in PolitiFact’s view: the system’s not going broke until 2041).
By 2010, PolitiFact was giving False ratings to statements that were true, such as U.S. Senator Rand Paul of Kentucky saying that federal workers make an average of $120,000, compared to a private sector average of $60,000. Paul used total compensation figures, which PolitiFact found misleading. The arbiters arbitrarily decided that salary alone is the valid figure, which would be news to the Internal Revenue Service.
By 2012, it was “fact-checking” extremely general statements of personal experience like this one by Paul’s father, Ron Paul, the Texas congressman and GOP presidential candidate: “I had the privilege of practicing medicine in the early ’60s before we had any government” involvement in health care. “It worked rather well, and there was nobody out in the street suffering with no medical care. But Medicare and Medicaid came in and it just expanded.”
Fact-checker Louis Jacobson tried to disprove Ron Paul’s statement, but eventually admitted his limits. It’s the only example we’ve seen of PolitiFact admitting that the truth was too complex or beyond the scope of the Truth-O-Meter treatment.
METERING WHAT?
Part of the problem is that PolitiFact sometimes investigates claims that, like Ron Paul’s, are clearly opinion rather than fact. Take for instance the fact-checking organization’s analysis of the liberal claim that Republican support of Congressman Paul Ryan’s budget plans will “end Medicare” as we know it.
PolitiFact declared that claim its “2011 Lie of the Year.” In doing so, it provoked outrage on the left, including claims from some, such as Dylan Otto Krider, that the designation was obviously a gesture of political correctness, meant only to handle fallout from its routine bashing of conservatives.
But was PolitiFact correct? Would the Ryan plan “end Medicare”?
Ryan’s plan is to keep Medicare intact for everyone over 55 and for those who want to stay in the program, and provide income-based vouchers for those who’d like to buy private insurance. Whether that would work is a legitimate subject for debate. But it’s not an established fact. Nor is the follow-up question — Does Ryan’s plan resemble the current Medicare regime? — a question of fact. It’s a subjective question, the answer to which depends upon whom you ask.
A lot of Democrats think it sounds nothing like the current Medicare program, and Wall Street Journal columnist James Taranto supported their right to make their case without being called liars.
“First, the claim that ‘Republicans voted to end Medicare’ is not a simple statement of fact, like ‘2+2=4’ or ‘America has 50 states,’” Taranto wrote. “Nor is it a simple false statement of fact, like ‘Jacksonville is the capital of Florida’ or ‘Barack Obama was born in Switzerland.’ It is, rather, an assertion that combines elements of fact (Republicans did vote), interpretation (‘end Medicare’ means different things to different people) and prediction (about how the Ryan plan, if enacted, would work out in practice). That is to say, it is a statement of opinion.
“Second, while it may well be the case that the statements PolitiFact criticizes were made in bad faith, it is also possible that the speakers sincerely believed what they were saying — that they were arguably wrong or unfair, but not dishonest.”
THEY LIKE TO HEAR THEMSELVES TALK
The 2011 Lie of the Year illustrates the problem with blaming “ignorant” Republicans for the one-sided results: PolitiFact is more interested in arguing politics than in running down one statistic or another. The journalists there are more pundits than fact-checkers, so their rulings tell you more about their political opinions than about Republican accuracy.
Even when the fact-checkers have a straightforward assignment to vet a single number, they return with context, gray area, and a bunch of other numbers, sometimes to hilarious effect.
When a Republican Congressional candidate in Oregon named Rob Cornilles said his opponent “votes 98 percent of the time with the Democrats,” he was ready for a challenge by PolitiFact Oregon, which is run by the Portland Oregonian. Cornilles’ source for the figure was the Oregonian’s own “Your Government” website, which tracks floor votes.
Nevertheless, PolitiFact Oregon reporter Janie Har managed to find the claim just Half True, because “voting with your party 98 percent of the time doesn’t mean you necessarily voted against the other party.”
Hundreds of words of wrongheaded reasoning follow, but that’s the point. You can argue that a 98 percent party loyalty index doesn’t show partisanship (you’d be wrong, but you can argue it), but that’s opinion-mongering, not fact-checking.
During the recall campaign in Wisconsin, a conservative group called The MacIver Institute reported that officials in charge of certifying petitions “will assume every completed signature is from a valid Wisconsin elector … even if their name is Mickey Mouse or Adolf Hitler.”
MacIver posted video proof, and specified that it would be up to Gov. Scott “Walker, or other independent groups, to discover fraudulent signatures among the tens of thousands of recall forms submitted.”
All of that was accurate, but PolitiFact Wisconsin called it Mostly False. Why? Because of the wording MacIver used in the title of the video it posted to YouTube. It read “Wis. Election Officials to Accept Mickey Mouse, Hitler Signatures.” PolitiFact felt “accept” was too final a term — because outside groups such as MacIver might still uncover the fraud on their own.
But the problem isn’t that PolitiFact makes bizarre judgments. It’s that PolitiFact pretends those judgments are facts.
Facts are simple things, easy to Google, and by definition, not in dispute. If an assertion is based on “grounds insufficient to produce complete certainty,” then you have the dictionary definition of an opinion.
But PolitiFact editor Adair refuses to acknowledge that his Mostly False/Half True/Kinda Sorta rulings don’t exactly ring with complete certainty.
“We are not putting our opinion in our work. We are doing solid, journalistic research, and then reaching a conclusion. That’s not the same as opinion,” he told the Cleveland Plain Dealer’s ombudsman, who was criticizing the Truth-O-Meter used at all PolitiFact operations, including the bureau at the Plain Dealer.
For Adair, opinions are something that other people have. But there are hundreds of examples of PolitiFact using phrases such as “our view,” “our judgment,” “as we’ll argue,” “we believe,” and even one “in our opinion” – all common ways to express opinion.
Sometimes, PolitiFact offers nothing but its opinion.
For example, the fact-checkers weighed in on the recent debate over what President Obama meant when he said, “If you’ve got a business, you didn’t build that. Somebody else made that happen.”
Did the word “that” refer to the business, or to the roads/bridges/teachers mentioned just before?
The fact-checkers had not one bit of information more than anyone else who opined on the issue, but that didn’t stop them from turning it into a fact to be checked.
“We believe, as do our friends at FactCheck.org and the Washington Post Fact Checker, that Romney has seriously distorted Obama’s comments,” PolitiFact’s Louis Jacobson wrote.
As the journalist Clive Crook wrote recently in The Atlantic, “You check a fact by asking whether it is true or false. If true or false is not good enough to assess the thing you are checking, then the thing you are checking is not a fact.”
After thousands of fact-checks and millions of words, the folks at PolitiFact have yet to produce a simple yes-or-no answer to anything. That’s because they don’t write about facts – things known, done, and certain. They write about politics – about the nearly infinite possible consequences of legislation and regulation.
Like judges, they issue rulings. Unlike judges, they deny these are opinions, and that’s their problem. If you define opinion so narrowly that it no longer covers your own act of judging the honesty and motivation of thousands of strangers, the word means nothing to you.
The fact-checkers aren’t just blind to their own opinion; they are blind to the concept. They’ll cite one wonk’s opinion as dispositive, when others who are well informed disagree.
Glenn Greenwald, writing for Salon last December, obliterated a ruling of Mostly False for Ron Paul’s argument that some vague, broad terms in a new Authorization for Use of Military Force law were “very disturbing language that explicitly extends the president’s war powers to just about anybody.”
Paul’s concern was that the text of a use-of-force authorization allowed a President — whether Barack Obama or any Republican or Democrat who would follow — to detain anyone who “substantially supports” terrorists or “associated groups.” Paul said he thought that description covered an awful lot of people.
Greenwald blasted PolitiFact for treating the opinions of two War on Terror hawks, Benjamin Wittes and Robert Chesney, as if they decided the issue.
“Just on the level of credentials, in what sense is Wittes — who, just by the way, is not a lawyer and never studied law — more of an expert on these matters than, say, Ron Paul or Kevin Drum [a writer for Mother Jones]?” Greenwald asks. “And why are the pronouncements of Robert Chesney that this AUMF language is not dangerously permissive more authoritative than the views on the same topic of ACLU lawyers or Professor [Jonathan] Hafetz [a legal scholar who has written two books on terrorist detention], who say exactly the opposite? Both Wittes and Chesney are perfectly well-versed in these issues, but so are countless others who have expressed Paul’s exact views. Why is the Wittes/Chesney opinion that these AUMF standards are perfectly narrow and trustworthy — and that’s all it is: an opinion — treated by PolitiFact as factually dispositive, while the views of Paul and those who agree with him are treated as false? That is preposterous nonsense.”
Such is PolitiFact’s alchemy: stir opinion with opinion to produce indisputable fact.
A few of the experts consulted by PolitiFact have been so burned that they’ve repudiated the organization’s findings.
Earlier this year, PolitiFact’s Louis Jacobson asked several experts whether Mitt Romney was right to say, “Our navy is smaller than it’s been since 1917. Our air force is smaller and older than any time since 1947.”
Two of them – Tom Bruscino, an assistant professor of history at the U.S. Army School of Advanced Military Studies, and Ted Bromund, a foreign affairs and security researcher at the Heritage Foundation – told him Romney was accurate. By ship count, the standard measure of navies, the U.S. Navy was smaller than it had been in almost a century; the Air Force doesn’t have the same sort of standard measure, but the claim appeared to be true by personnel count.
Jacobson conceded that Romney’s claim was accurate, but still gave him a Pants on Fire, because his accurate claim was “meaningless,” “glib,” “preposterous,” and “ridiculous.”
Pretending that’s not an expression of opinion is also meaningless, glib, preposterous, and ridiculous.
“My opinion, for what it is worth, is that … Romney’s base statement was factually accurate,” Bruscino wrote.
“I’m not sure if this piece was written out of malice, or if it is simply a complete misfire,” Bromund wrote. “I’ve worked with PolitiFact before, and while I’ve not agreed with previous pieces they were at least defensible.”
The Cato Institute’s Michael Cannon no longer cooperates with PolitiFact reporters, owing to the group’s Lies of the Year for 2009 and 2010.
In 2009, PolitiFact called Sarah Palin’s “death panels” the Lie of the Year, since there were, of course, no literal death panels in ObamaCare. In 2010, it called “government takeover of health care” its lie of the year.
Cannon’s reason “is not so much that each of those statements is actually factually true; it is rather that they are true for reasons that PolitiFact failed to consider. PolitiFact’s ‘death panels’ fact-check never considered whether President Obama’s contemporaneous ‘IMAC’ proposal would, under standard principles of administrative law, enable the federal government to ration care as Palin claimed. (In an August 2009 opinion piece for the Detroit Free Press, I explain how the IMAC proposal would do just that.) PolitiFact’s ‘government takeover’ fact-check hung its conclusion on the distinction between ‘public’ vs. ‘private’ health care, without considering whether that distinction might be illusory.”
The weird thing about opinions is that they tend to follow patterns. The pattern that emerges in PolitiFact’s many analyses is that Republicans are wrong far more often than their Democratic counterparts. Of course, millions of Americans are convinced that Republicans are wrong about almost everything that matters. We call them Democrats.
WHAT THE TRUTH-O-METER CAN’T MEASURE
The influential New York University professor Jay Rosen was an early and ardent supporter of PolitiFact. For a long time, he’s been advocating for a journalism that cuts through the false balance of “he said, she said” reporting, a style Edward R. Murrow once equated to balancing the views of Jesus Christ and Judas Iscariot.
But Rosen acknowledges that many issues aren’t so simple.
“Disputes can be so impenetrable, accounts so fragmentary, issues so complicated that it’s hard to locate where truth is,” he writes. “In situations like that—which I agree are common—what should journalists committed to truth-telling do? Is it incumbent on them to decide who’s right, even though it’s hard to decide who’s right?
“I would say no. It’s incumbent on them to level with the users. If that means backing up to say, ‘Actually, it’s hard to tell what happened here,’ or, ‘I’ll share with you what I know, but I don’t know who’s right.’ This may be unsatisfying to some, but it may also be the best an honest reporter can do.”
PolitiFact doesn’t admit when it can’t find an answer.
Out of scores of fact checks reviewed for this article, there was just one in which PolitiFact admitted that it couldn’t pronounce on the truthfulness of a claim; that was Ron Paul’s statement that medicine in the early ’60s “worked rather well.”
There are plenty of other supporters of PolitiFact’s journalism, which is often nuanced and provides enough information for a reader to reach a contrary conclusion. But it’s hard to find anybody who likes the Truth-O-Meter. It takes something perfectly defensible, arguing politics, and turns it into something ugly: a tool for slander.
PolitiFact totals up its Truth-O-Meter rankings for the people it covers, producing a sort of honesty report card. Adair used to call this “a tremendous database of independent journalism” that shows one’s “batting average” for honesty.
Political opportunists use it to malign their opponents.
Early this summer, Loretta Weinberg, a Democrat and the New Jersey state Senate’s majority leader, cited PolitiFact when she claimed that the fact-checking organization “listed Gov. Scott Walker of Wisconsin as the governor who told the most lies” and “our own ‘untruthful’ Gov. Chris Christie made it into Politifact’s top five of ‘Lie-en governors.’”
But few governors have been fact-checked even once, and there is no such PolitiFact list — those are facts you can check. Weinberg’s suggestion that there is a list — and that Walker and Christie are on it — drew a rebuke from PolitiFact editor Adair. There are “report cards,” he wrote in a June 7 piece blowing up Weinberg, and those report cards “provide a tally of the claims we chose to check. But it’s not accurate to say the report cards indicate who ‘told the most lies.’”
Not two months later, however, PolitiFact’s own Tom Feran wrote an article in the Cleveland Plain Dealer with this headline: “Campaign attacks give Josh Mandel Pants on Fire crown.”
Like “list,” “crown” suggests a kind of tally. In this case, Feran offered a report card (mostly written by himself) as proof that Mandel, a Republican Senate candidate, “told the most lies,” the exact thing Adair had denounced not 60 days before. But nobody at PolitiFact bothered to stop him or call out the hypocrisy.
That sort of pseudoscientific claim is used by Democrats exactly as much as you might expect: incessantly.
Nor is it statistically valid to create “crowns,” ranked “lists,” “report cards” or “biggest-liar” awards based on PolitiFact’s work. To explain why, it helps to have a background in statistics, like Professor Russell D. Renka of Southeast Missouri State University.
In a text on polling, Renka writes, “Any deviation from random produces biased selection, and that’s one of the hallmarks of bad polls.”
Adair admits that selection bias affects his report cards. That’s why he called out Weinberg. “We are not social scientists and are not using any kind of random sample to select statements to check,” he said.
Indeed, they’re not. Rather than pick political claims at random — like, say, throwing darts at a newspaper taped to the wall and checking the speared quotes — PolitiFact reporters and editors usually examine statements they find dubious.
However, if Adair dealt with selection bias by instituting random sampling, he’d have a bigger problem, according to his persistent critic Bryan White. “Finding somebody to read about uninteresting facts is the big problem,” he said.
One of PolitiFact’s competitors has a solution. FactCheck.org avoids the slander problem by refusing to use a gimmicky rating system like PolitiFact’s Truth-O-Meter.
“I’ve never been able to see an academically defensible way to hand out those kinds of ratings,” FactCheck’s director, Brooks Jackson, said recently.
FIRST PRINCIPLES
While the Pulitzer Prize Board gave PolitiFact its seal of approval in 2009, two years later it appeared ready to take it back. It gave the 2011 Pulitzer for commentary to Joseph Rago of the Wall Street Journal for a series of editorials on ObamaCare capped off with a scathing critique of PolitiFact.
Rago shredded PolitiFact for its 2010 Lie of the Year about the government takeover of health care.
“The regulations that PolitiFact waves off are designed to convert insurers into government contractors in the business of fulfilling political demands, with enormous implications for the future of U.S. medicine,” Rago wrote. “All citizens will be required to pay into this system, regardless of their individual needs or preferences. Sounds like a government takeover to us.”
The problem, Rago says, is that “PolitiFact’s decree is part of a larger journalistic trend that seeks to recast all political debates as matters of lies, misinformation and ‘facts,’ rather than differences of worldview or principles. PolitiFact wants to define for everyone else what qualifies as a ‘fact,’ though in political debates the facts are often legitimately in dispute.”
Or, as liberal MSNBC host Rachel Maddow put it: “You are undermining the definition of the word ‘fact’ in the English language by pretending to it in your name. The English language wants its word back.”
Her point isn’t merely rhetorical. In chasing what it believes to be true, PolitiFact has lost sight of what is known to be fact. It’s even forgotten what the word means: “an event or thing known to have happened or existed.”
It comes from the Latin factum, “a thing done or performed,” and in most of the Romance languages, the word for “fact” and “done” remains identical. Facts are in the realm of the past, of things already done. They don’t change. They’re concrete.
Yet PolitiFact is forever peering into the future, citing 10-year budget projections and expert predictions. But the future is not a thing done; it is multifarious and infinite. The past is a yes-or-no question.
As Clive Crook put it: “Once you can’t say true or false, opinion enters in.”
Truth-O-Meters enter in.