
October 2010 LSAT Curve - PrepTest 61

LSAT Blog Fancy Line Graph

Just wanted to let everyone interested know that I've included the October 2010 LSAT (PrepTest 61) "curve" data in my blog post containing all the other LSAT PrepTest raw score conversion charts.

The "curve" on the October exam was pretty generous. It allowed 12 incorrect answers to get a 170. (The average for September/October exams in recent years was only 10.25 incorrect answers).

The chart below shows how many questions you could get wrong on recent exams and still achieve a particular scaled score (out of 180):

LSAT Blog December Curve Comparison Averages 2002-2009

This continues the trend of relatively generous curves in the most recent exams.

(See what it's taken to get an LSAT score of 160 or 170.)

How'd everyone do?


Photo by blprnt_van


The Best Answer Choice to Guess on the LSAT

LSAT Blog Best Answer Choice Guess

Because there's no guessing penalty on the LSAT, you should fill in a bubble for every answer.

I recently analyzed the LSAT PrepTest Answer Keys from several different angles.

This blog post contains my findings.

I'll start off with my most-significant findings, which you will find useful.

The rest of the blog post is the data I've analyzed, along with some less-significant findings.

Most of that data isn't too useful, but it's there if you want to look at it and obsess over the details. If you have an amazing memory, you might want to note some of the more specific findings, but the "most significant" ones are probably enough for 99% of people reading this to remember.



Most-Significant Findings

1. Overall, D is most likely to be the correct answer on the LSAT, and E is the least likely to be the correct answer.

Looking at every released PrepTest answer key from June 1991-December 2009, D is 2.1 percentage points more likely than E to be the correct answer.

(However, the variation in how likely each letter is to be the correct answer has grown less extreme over time. Looking only at the answer keys for the last 10 years, D is only 1.7 percentage points more likely than E, and over the last 5 years, D is only 1.26 percentage points more likely than E. That's still a significant-enough difference to be worth knowing, though.)


Take-away:
When guessing randomly between a few choices, if you haven't eliminated D, choose D. Don't choose E when you're down to a few choices and can't decide between them.

If you have to randomly fill in bubbles, choose D.

***
2. Among the last 5 questions of a given section, D is more likely than the other letters to be the correct answer. A is the least likely.

Take-away:
When guessing randomly on any of the last 5 questions in a section, if you haven't eliminated D, choose it. Whatever you do, don't choose A if guessing randomly.

If you run out of time and have to randomly fill in bubbles, choose D. The probabilities vary depending upon the section type, so feel free to look at the data below if interested in the nitty-gritty.



Answer keys from every released PrepTest, from the past 10 years, and from the past 5 years:

Using answer keys from every released LSAT PrepTest (June 1991-December 2009):


D = 21.2%
B = 20.5%
C = 20.1%
A = 19.2%
E = 19.1%


Using answer keys from every released LSAT PrepTest over the past 10 years (June 2000-December 2009):

D = 21.2%
C = 20%
B = 19.8%
A = 19.6%
E = 19.5%


Using answer keys from every released LSAT PrepTest over the past 5 years (June 2005-December 2009):

D = 20.8%
C = 20.2%
B = 19.8%
A = 19.8%
E = 19.5%
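
(If you're curious how tallies like the percentages above are computed, here's a minimal Python sketch. The answer keys in it are made-up placeholders, not real PrepTest keys - you'd substitute the actual keys, one string of letters per section.)

```python
from collections import Counter

# Hypothetical answer keys -- one string per section, in question order.
# (These are placeholders, not real PrepTest keys.)
answer_keys = [
    "DBCAEDDBCEABDCEDBACEDBCDA",  # a made-up 25-question section
    "CADBEEDCBADBCEADCBEDACBDE",  # another made-up section
]

def letter_frequencies(keys, last_n=None):
    """Percentage of questions for which each letter is correct.

    If last_n is given, only the final last_n questions of each section count.
    """
    counts = Counter()
    for key in keys:
        counts.update(key[-last_n:] if last_n else key)
    total = sum(counts.values())
    return {letter: round(100 * n / total, 1) for letter, n in sorted(counts.items())}

print(letter_frequencies(answer_keys))           # overall letter percentages
print(letter_frequencies(answer_keys, last_n=5)) # last 5 questions of each section
```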



Answer keys by section from every PrepTest, from the past 10 years, and from the past 5 years:

Logic Games

Not too much in the way of useful trends here.

Using Logic Games answer keys from every released LSAT PrepTest (June 1991-December 2009):

B = 20.5%
E = 20.3%
D = 20.1%
C = 19.9%
A = 19.3%


Using Logic Games answer keys from every released LSAT PrepTest over the past 10 years (June 2000-December 2009):

C = 20.6%
A = 20.4%
D = 20.3%
E = 19.5%
B = 19.2%


Using Logic Games answer keys from every released LSAT PrepTest over the past 5 years (June 2005-December 2009):

B = 22.1%
C = 20.9%
E = 19.8%
D = 19.2%
A = 18%



Logical Reasoning

In Logical Reasoning, B and D have consistently been the most likely correct answer choices overall, over the past 10 years, and over the past 5 years.


Using Logical Reasoning answer keys from every released LSAT PrepTest (June 1991-December 2009):


D = 21.6%
B = 20.6%
C = 20.3%
E = 19%
A = 18.6%


Using Logical Reasoning answer keys from every released LSAT PrepTest over the past 10 years (June 2000-December 2009):

B = 21.2%
D = 21.1%
C = 20.1%
E = 19.3%
A = 18.3%


Using Logical Reasoning answer keys from every released LSAT PrepTest over the past 5 years (June 2005-December 2009):

D = 21.2%
B = 20.1%
A = 19.9%
C = 19.7%
E = 19.1%



Reading Comprehension

While D and B have been the most likely answer choices overall in RC, B has shifted to become the least likely answer choice over both the last 10 years and the last 5 years. A has risen to become the second-most common answer choice over this period.


Using Reading Comprehension answer keys from every released LSAT PrepTest (June 1991-December 2009):

D = 21.5%
B = 20.1%
A = 19.9%
C = 19.8%
E = 18.6%


Using Reading Comprehension answer keys from every released LSAT PrepTest over the past 10 years (June 2000-December 2009):

D = 22.3%
A = 21.3%
E = 19.6%
C = 19.1%
B = 17.7%


Using Reading Comprehension answer keys from every released LSAT PrepTest over the past 5 years (June 2005-December 2009):

D = 21.3%
A = 21%
C = 20.3%
E = 20.1%
B = 17.3%


***

Looking at the last 5 answer choices per section:


Using only last 5 answers from every released LSAT PrepTest section (June 1991-December 2009):

D = 22.1%
E = 21.3%
B = 20.2%
C = 18.8%
A = 17.7%



Using only last 5 answers from every released LSAT PrepTest section over the past 10 years (June 2000-December 2009):

D = 21.4%
B = 21.4%
E = 20.4%
C = 19.4%
A = 17.5%


Using only last 5 answers from every released LSAT PrepTest section over the past 5 years (June 2005-December 2009):

B = 22%
D = 21.3%
E = 20%
C = 20%
A = 16.67%



Please note that there is a great deal of fluctuation when looking at the last 5 answer choices per section by section type. This is likely due to the fact that we're working with a very small sample size (in the hundreds, which is very few questions compared to the number of LSAT questions overall - nearly 6,500 in total).

Logic Games

Using only last 5 answers from only the Logic Games section in every released LSAT PrepTest (June 1991-December 2009):

A = 22.5%
D = 20.3%
B = 20.3%
E = 19.4%
C = 17.5%


Using only last 5 answers from only the Logic Games section in every released LSAT PrepTest over the past 10 years (June 2000-December 2009):

D = 22.7%
B = 22%
A = 20.7%
C = 18%
E = 16.7%


Using only last 5 answers from only the Logic Games section in every released LSAT PrepTest over the past 5 years (June 2005-December 2009):

B = 25.3%
D = 24%
E = 20%
C = 16%
A = 14.7%



Logical Reasoning

Using only last 5 answers from only the Logical Reasoning section in every released LSAT PrepTest (June 1991-December 2009):

E = 23.3%
D = 22.7%
B = 19.5%
C = 19.5%
A = 15%


Using only last 5 answers from only the Logical Reasoning section from every released LSAT PrepTest over the past 10 years (June 2000-December 2009):

B = 23.1%
E = 21.7%
C = 21.1%
D = 19.7%
A = 14.4%


Using only last 5 answers from only the Logical Reasoning section in every released LSAT PrepTest over the past 5 years (June 2005-December 2009):

C = 22.7%
B = 20.7%
E = 20%
D = 19.3%
A = 19.3%



Reading Comprehension

Using only last 5 answers from only the Reading Comprehension section in every released LSAT PrepTest (June 1991-December 2009):

D = 22.8%
B = 21.3%
E = 19.1%
C = 18.8%
A = 18.1%


Using only last 5 answers from only the Reading Comprehension section in every released LSAT PrepTest over the past 10 years (June 2000-December 2009):

D = 23.3%
E = 21.3%
A = 20.7%
C = 19.3%
B = 19.3%


Using only last 5 answers from only the Reading Comprehension section in every released LSAT PrepTest over the past 5 years (June 2005-December 2009):

D = 22.7%
B = 21.3%
E = 20%
C = 18.7%
A = 17.3%


Percentages may not add to 100% due to rounding.

Photo by johnwardell / CC BY-NC-ND 2.0

Chances of Same Answer Choice in a Row on the LSAT

LSAT Blog Same Answer Choice Row Chances

Looking through the LSAT PrepTest answer keys, I found only 9 instances in modern LSAT history (June 1991-December 2009) where the same answer choice appeared 4 times in a row.

2 of these instances occurred in the same LSAT section.

Now, you might have thought LSAC artificially increases the number of 4-in-a-rows to throw test-takers off - to make them second-guess themselves. However, it appears that LSAC artificially decreases the number of 4-in-a-rows. (I've explained the math supporting this below.)

First, why am I even talking about this?

1. To remind you that it's possible to have 4 of the same answer choice in a row - even for it to happen more than once in the same section.

2. To tell you that if you have 3 of the same answer choice in a row and have to randomly guess on the next question, you may want to consider guessing something other than that letter simply because LSAC appears to purposely avoid 4-in-a-row.

Of course, focus on the content of the exam above all else. Patterns and probabilities should always come second to content. However, it's still useful to be aware of them for random guessing purposes.

Note: Since the cat's now out of the bag, it's possible that LSAC may change its strategy. Don't blame me if your exam's answer key has a 4 in a row or two, but if the 64 released PrepTests are any indication, there probably won't be a single 4-in-a-row.

Cases of same letter 4-in-a-row:

PrepTest 8 (June 1993), LR1, Q4-7 - answer C
PrepTest 12 (October 1994), LR1, Q11-14 - answer D
PrepTest 14 (February 1995), LG, Q13-16 - answer D
PrepTest 19 (June 1996), LR2, Q18-21 - answer C
PrepTest 22 (June 1997), LG, Q6-9 - answer B
PrepTest 22 (June 1997), LG, Q20-23 - answer E
PrepTest 24 (December 1997), LG, Q18-21 - answer E
PrepTest 36 (December 2001), LR2, Q18-21 - answer B
PrepTest 45 (December 2004), LG, Q7-10 - answer A


If anyone's interested in the math behind all this:

Actual occurrences where a section contained at least one sequence of 4 in a row in the 64 released LSATs = 8

Chances of (at least) one 4-in-a-row in any particular section = ~16.2%

# of sections one would expect to contain (at least) one 4-in-a-row in the 64 released LSATs = 41.472
(4 sections per exam * 64 exams) * 16.2% = 256 * 16.2% = 41.472, which is over 5 times the actual number of occurrences.

The probability of finding 4 questions in a row with the same answer is (1/5)^3.

This is because, given some answer for a question, the odds that the next question will have that same answer is 1/5. Then the odds that the 3rd question will also have that answer is 1/5 and finally the odds that the 4th question will too have the same answer is also 1/5. By multiplying, you find that for a set of 4 questions each with 5 possible answers, the odds of them having the same answer is 1/5 * 1/5 * 1/5 = (1/5)^3

Statistically, 1 out of every 125 sets of 4 questions (if the answers were truly random) would have 4-in-a-row of the same answer choice.

Thus, the odds of a set of 4 questions NOT having the same answers is: 124/125

Because there are 22 sets of 4 questions in an LSAT section (questions 1-4, 2-5, 3-6…22-25), we calculate the odds that none of those sets of 4 questions shares the same answer: (124/125)^22 = 83.802464%

This means that the odds of at least one of those sets of 4 questions having all the same answer is 1-.83802464 = 16.197536%

There have been 256 individual sections of the modern LSAT given. In theory, approximately 16.2% of those sections should have contained at least one string of 4 questions with the same answers. 256 * .162 = 41.472, we’ll round that down to 41 sections that should have contained a string of at least 4 questions with the same answers (statistically speaking of course).

(I said above that there are 22 sets of 4 questions in an LSAT section because there are 22 possible sequences of 4 in a section of 25 questions. Sure, many sections have slightly more or fewer than 25 questions, but let's assume those differences cancel each other out.)
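
(If you'd rather let a computer do the arithmetic, here's a short Python sketch that reproduces the numbers above. Like the calculation in the text, it treats the 22 overlapping 4-question windows as if they were independent, which is only an approximation - but a close one.)

```python
# Reproducing the arithmetic above. Like the calculation in the post, this treats
# the 22 overlapping 4-question windows as if they were independent (an approximation).
p_window = (1 / 5) ** 3            # given one answer, the next 3 match it: 1/125
windows_per_section = 22           # 4-question windows in a 25-question section
p_section = 1 - (1 - p_window) ** windows_per_section
sections = 4 * 64                  # 4 scored sections per exam, 64 released exams

print(f"Chance per section: {p_section:.4%}")            # ~16.1975%
print(f"Expected sections: {p_section * sections:.1f}")  # ~41.5, versus 8 observed
```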

Bottom line: There appears to be a conspiracy to decrease the number of 4-in-a-rows.

***

Discussion of the two 4-in-a-rows in PrepTest 22's Logic Games section:

If you've done a few Logic Games, you may have noticed that LSAC often presents the content of answer choices in either alphabetical or numerical order.

I find it curious that in PT22, LG, Q20, the answer choices are presented in reverse alphabetical order, leading to a 4-in-a-row sequence of Es.

Call me crazy, but I'm entertaining the possibility that this exception to the traditional alphabetical presentation of choices was intentional in order to create 2 sequences of 4-in-a-row in the same section.

There have been 256 LSAT sections, so it's not that unlikely that we'd see a case of 2 4-in-a-row sequences in the same section by now. However, given the sketchiness of the reverse alphabetical ordering in PT22, LG, Q20, I'm calling foul play.

***

Cases where girls named Becca were likely to freak out:

PrepTest 16 (September 1995), LR2, Q13-17 - BECCA
PrepTest 44 (October 2004), LR1, Q1-5 - BECCA
PrepTest 57 (June 2009), RC, Q12-16 - BECCA


Actual occurrences where a section contained at least one sequence of BECCA in the 64 released LSATs = 3

Chances of at least one BECCA in any particular section = .67%

# of sections one would expect to contain (at least) one BECCA in the 64 released LSATs = 1.7152
(4 sections per exam * 64 exams) * .67% = 256 * .67% = 1.7152, which is pretty close to the number of actual occurrences


The probability of finding a particular 5-letter sequence in a given set of 5 questions is (1/5)^5 = 1/3125.

Statistically, 1 out of every 3125 sets of 5 questions (if the answers were truly random) would have a particular 5-letter sequence.

Thus, the odds of a set of 5 questions NOT containing a particular sequence is: 3124/3125

Because there are 21 sets of 5 questions in an LSAT section (questions 1-5, 2-6, 3-7…21-25), we calculate the odds that none of those sets of 5 questions contains a particular 5-letter sequence: (3124/3125)^21 = 99.330146%

This means that the odds of at least one of those sets of 5 questions containing the particular sequence is 1-.99330146 = .669854%

There have been 256 individual sections of the modern LSAT given. In theory, approximately .67% of those sections should have contained at least one instance of a particular 5-letter sequence. 256 * .67% = 1.7152, which we'll round up to 2 sections that statistically should have contained a string spelling BECCA.


(I raise to the 21st power because there are 21 possible sequences of 5 in a section of 25 questions. Sure, many sections have slightly more or fewer than 25 questions, but let's assume those differences cancel each other out.)

# of sections containing (at least) one particular 5-in-a-row sequence one would expect over the course of 64 LSATs = (4 sections per exam * 64 exams) * .67% = 256 * .67% = 1.7152. Round that to the nearest whole number, and we get 2, which is 1 fewer than the actual number of occurrences. Nothing shocking or scary about that.
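
(And here's the same style of Python sketch for the 5-letter-sequence case, using the same independence approximation.)

```python
# Same approximation, applied to one specific 5-letter sequence (e.g. B-E-C-C-A).
p_window = (1 / 5) ** 5            # a given 5-question window spells that exact sequence
windows_per_section = 21           # 5-question windows in a 25-question section
p_section = 1 - (1 - p_window) ** windows_per_section
sections = 4 * 64

print(f"Chance per section: {p_section:.4%}")            # ~0.6699%
print(f"Expected sections: {p_section * sections:.2f}")  # ~1.71, versus 3 observed
```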

***

Bottom line: If the answer choices spell your name, don't freak out. If the answer choices spell the word "DEAD," don't freak out. This sort of thing can, and does, happen.

Photo by unloveable / CC BY-NC-SA 2.0

LSAT Answer Keys for Every PrepTest / Exam

Below, you'll find the answer keys to every LSAT PrepTest. However, the answer keys only tell you the correct answers - LSAT PrepTests don't tell you why a particular answer choice is right or wrong.

This is unfortunate, because learning from your mistakes is the way to improve your score. Since the LSAT doesn't come with explanations, you'll need to get them separately.

On LSAT Blog, you can get PDF explanations for LSAT PrepTests by section (LG, LR, and RC):


-Logic Games explanations for the newest PrepTests
-Logic Games explanations for PrepTests 62-71
-Logic Games explanations for PrepTests 52-61
-Logic Games explanations for PrepTests 29-38
-Logic Games explanations for PrepTests 19-28

-Logical Reasoning explanations for the newest PrepTests
-Logical Reasoning explanations for PrepTests 62-71
-Logical Reasoning explanations for PrepTests 52-61
-Logical Reasoning explanations for PrepTests 44-51
-Logical Reasoning explanations for PrepTests 29-38
-Logical Reasoning explanations for PrepTests 19-28

-Reading Comprehension explanations for the newest PrepTests
-Reading Comprehension explanations for PrepTests 62-71
-Reading Comprehension explanations for PrepTests 52-61
-Reading Comprehension explanations for PrepTests 44-51
-Reading Comprehension explanations for PrepTests 29-38
-Reading Comprehension explanations for PrepTests 19-28


***

Answer Keys for LSAT PrepTests 1-10:

LSAT Blog Answer Keys PrepTests 1-10

Answer Keys for LSAT PrepTests 11-20:

LSAT Blog Answer Keys PrepTests 11-20

Answer Keys for LSAT PrepTests 21-30:

LSAT Blog Answer Keys PrepTests 21-30

Answer Keys for LSAT PrepTests 31-40:

LSAT Blog Answer Keys PrepTests 31-40

Answer Keys for LSAT PrepTests 41-50:

LSAT Blog Answer Keys PrepTests 41-50

Answer Keys for LSAT PrepTests 51-59 (and June 2007):

LSAT Blog Answer Keys PrepTests 51-59 and June 2007

Answer Keys for LSAT PrepTests 60-69:

LSAT Answer Keys PrepTests 60-69

Answer Keys for LSAT PrepTests 70-74:

LSAT Answer Keys PrepTest 70-74

Answer Keys for PrepTests A, B, C, and Feb 97:

A, B, and C are in LSAC's SuperPrep book. Feb 97 is the Official LSAT PrepTest with Explanations (now out-of-print - available as LSAC's ItemWise).

LSAT Answer Keys Feb Exams

* = item removed from scoring

LG = Logic Games
LR = Logical Reasoning
RC = Reading Comprehension


Each published exam has 4 sections. I've included the answer keys for each section in the order in which they appear in the published exam.

(For example, in the published version of PrepTest 1, the 4 sections appeared in the following order: RC, LG, LR, LR. The first section of LR is Section 3 of the exam. As such, I've placed it in the 3rd column of my answer key for that exam.)

***

Also see LSAT PrepTest Raw Score Conversion Charts.


All actual LSAT content used within this work is used with the permission of Law School Admission Council, Inc., Box 2000, Newtown, PA 18940, the copyright owner. LSAC does not review or endorse specific test preparation materials or services, and inclusion of licensed LSAT content within this work does not imply the review or endorsement of LSAC. LSAT is a registered trademark of LSAC.

Easiest LSAT Curve: December | Hardest LSAT Curve: June

LSAT Blog Easiest LSAT Curve June Feb Oct Dec

One of the most common questions I get from those of you new to the LSAT is: "Which month's LSAT is the easiest/hardest?"

Anyone who knows anything will tell you, "They're all the same. No month's LSAT is particularly easy or difficult."

You then ask, "But what about the curve?"

Answer: "It's not actually curved. It's equated."

If you're especially savvy, you won't be satisfied with that. You'll look at my LSAT PrepTest Raw Score Conversion Charts and calculations of what it takes to get an LSAT score of 160 or 170.

Using that data, you'll find that the December exam consistently has the easiest "curve," and the June exam consistently has the hardest.

In this blog post, I do two things:

1. include my analysis of the raw score conversion charts, which supports the claim that December exams consistently have the easiest "curve" and June exams consistently have the hardest "curve."

2. include my lengthy email conversation with the blog reader who brought this to my attention.

I should mention right off the bat that the differences we're talking about are only a point or two out of 180. Additionally, I still think that the June exam is the best for admissions purposes (see February vs. June LSAT and June vs. October LSAT.)

However, the differences covered in this blog post are consistent for the past 8 years (and in some cases, beyond that). Even an average difference of a point or two is significant.


Analyzing the Past 8 Years (aka how do you know I'm not making this up?)
First, I did a month-by-month comparison of the raw score conversion charts for the past 8 years of exams: PrepTest 37 (June 2002) through PrepTest 59 (December 2009), the most recent release. I analyzed the June, September/October, and December exams on 5 data points.

The following is the average number of questions you could answer incorrectly (by month) and still achieve scaled scores of 160, 165, 170, 172, and 180, respectively, over the past 8 years:


LSAT Blog December Curve Comparison Averages 2002-2009

In case you can't see the image, here's that data in text form:

Jun: 24.125, 16.5, 10, 8, 1.5
S/O: 24.875, 16.5, 10.25, 8.25, 1.75
Dec: 26.25, 18.125, 11.375, 9.25, 2
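
(To make the size of these differences concrete, here's a short Python sketch that takes the monthly averages listed above and prints how many more wrong answers December allowed than June at each score level.)

```python
# The monthly averages from the chart above (average number of questions you could
# miss and still hit each scaled score, June 2002 - December 2009).
scores   = [160, 165, 170, 172, 180]
june     = [24.125, 16.5,   10.0,   8.0,  1.5]
december = [26.25,  18.125, 11.375, 9.25, 2.0]

for score, jun, dec in zip(scores, june, december):
    print(f"{score}: December allowed {dec - jun:.3f} more wrong answers than June")
```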


(I didn't examine any data points between 173 and 179 because each exam lacked at least one of these scores. In other words, there were too many cases where there was no raw score that converted to one of those scores out of 180.)

In all cases for averaged raw score conversions over this period (for these data points), one could answer a greater number of questions incorrectly on the December exam than on either the June or the Sept/Oct exams, yet still achieve the same score out of 180.

In 4 out of 5 cases, the Sep/Oct exam was slightly "easier" than June, as well. In the other case, they were perfectly tied.

To put it another way, in 4 out of 5 cases, the June exam required the most correct answers to achieve a particular scaled score. In the other case, it was perfectly tied with Sep/Oct.


How Big Is This Trend? Does It Also Hold For The 8 Years Before That?

To find out, I also analyzed PrepTest 11 (June 1994) through PrepTest 36 (December 2001) by month - June, September/October, and December - on 2 data points, just to see whether the general trend held true in the 8 years prior to June 2002:

The following is the average number of questions you could answer incorrectly over that period (by month) and still achieve scaled scores of 160 and 170, respectively:


LSAT Blog December Curve Comparison Averages 1994-2001

In case you can't see the image, here's that data in text form:

Jun: 27.875, 12
S/O: 29.125, 12.875
Dec: 27.625, 14.125


These findings are somewhat surprising, given what I found for June 02-December 09 (above).

From 1994-2001, the advantage of December over June at the 170 data point was, on average, even larger than it was from 2002-2009. (The older period had a difference of 2.125 raw score points at the 170 data point, while the more recent period had an average difference of only 1.375.)

In other words, not only was the June exam still the toughest one on which to get a 170 in this period, but the June-versus-December gap at 170 was even larger in this period than in the more recent one.

I also found that the September/October exam's "easiness" was closer to December than it had been in the more recent period.

However, my most surprising finding for this period: it was actually a bit easier to get a 160 in September/October than in either June or December, a trend that certainly hasn't held true in the past 8 years.


How Do February LSAT Conversions Compare To Those of Other Months?

After all this analysis of June, Sep/Oct, and Dec, I started wondering how February exams compare. Unfortunately, no February LSATs have been released since 2000, so our sample size is both older and smaller than it otherwise would have been.

However, I did what I could. I looked specifically at the conversion charts for nearly every exam from February 1994 - December 2000. 7 February exams were released over this period. (I excluded the entire year of 1998 because that year's February exam was not released.)

I didn't compare the February exam data with current exam data because it currently takes more questions correct to get a particular scaled score (out of 180) across the board than it did in the past (data).

The following is the average number of questions you could answer incorrectly over that period (by month) and still achieve scaled scores of 160 and 170, respectively:

In case you can't see the image, here's that data in text form:

Feb: 27.166, 12.333
Jun: 27.833, 11.833
S/O: 29.5, 12.833
Dec: 27.667, 14.166


At the 160 data point, the Feb exam was the most difficult (required the most questions correct to get a 160). At the 170 data point, it was the second most difficult.

Of course, as we saw when looking at the entire 8-year period from 1994-2001 (previous section), what was true at the 160 data point back then isn't necessarily true today.

We have no way of knowing whether Feb exams have continued to be relatively difficult, of course, since they're no longer released. However, it's still something to keep in mind.
***

The following email exchange includes some off-the-cuff hypothesizing about the reasons that December exams consistently allow a greater number of incorrect answers for the same scaled score. (The data above also raises the question of why June exams consistently allow the fewest incorrect answers for the same scaled score.)

Unfortunately, we have more questions than answers as to "why."


Is it because the December tests are consistently harder and June tests are consistently easier?

Looking at the exams, it doesn't seem that way. Without a large sample size, it's difficult to say. All we can say is that difficulty of particular exams and questions is, to a certain extent, subjective.

Additionally, one would think LSAC aims to make each exam of equal difficulty to avoid too much variation in the raw score conversion charts. After all, LSAC wants to maintain the equivalency of scores from different exams.

Is it because the December/June pools of LSAT-takers are "different" in some way? Maybe.

Is it because LSAC abducted Elvis? Maybe.

Any hypothesis about it is just that - a guess.


As I've said before, statistics isn't my thing - it's much easier for me to take averages, as I did above, than to tell you the reason the numbers appear as they do - that's a whole different ball game.

I've asked LSAC to shed some light on these questions. Here's part of LSAC's response:

"The differences you describe are very small and represent the type of minor fluctuation we expect to observe."

I still think the differences are important enough to warrant this blog post.


My emails with the blog reader (Christopher) who brought this to my attention:


Christopher:
I read your posts about the LSAT "curve" (that's not really a curve) and then looked at the raw score conversion charts - it seems to me from quick analysis that the December LSAT consistently seems to be "easier."

Easier is a relative term I suppose, but let's say we look only at the upper end of the scores - ie. 170-180. It's hard to get an exact comparison since there are so many blanks in the upper ranges from year to year but it seems that consistently in a given year with the December test, you can afford to get more questions incorrect to achieve the same scaled score.

Let's say we look at 180 and 172 which are both uninterrupted (no blanks) since June 2002. Basically in every instance, you could afford to get more wrong in December than in June (granted the differential is only 1-2 points). 2005 seems to be an odd year, but for the rest if you pick a score between 172-180 where there are three data points, overwhelmingly it seems to indicate that December is more forgiving.

I guess you could make the argument that the December test is in fact "harder" and thus someone who scored 94/101 in Dec '09 would most likely score 96/101 in Jun '09 (achieving 176 on both tests) - BUT given a small chance of human error (you pick the right answer, but fill in the wrong bubble) or let's say you run out of time and leave the last question on every section blank no matter how easy or hard it may be - aren't you better off taking the December test if you're aiming for 175-180?


Me:
For the last 8 years, at least, the data supports what you suggested.

It would certainly be worthwhile to take in December if one's primary goal were to safely achieve high scores - less punishment for bubbling errors, or for any errors at all, of course. I would expect someone scoring at that level wouldn't have significant time issues, though.

There are considerations that, generally speaking, might lead one to avoid December, though. An admissions-related consideration is that Dec is rather late in the cycle to apply to a T14 school, especially for T5 schools. Of course, a 175-180 would more than eliminate any drawback of applying that late. However, if something goes wrong in December, you're basically out of luck for that cycle (for many top schools).

(You could always take in December and apply in the following fall, but most people don't plan that far ahead, and most aren't willing to wait that long.)

At the same time, though, if you're capable of getting 175-180 in Dec, you can probably also get it in Feb, June, or Sept/Oct. Then again, better safe than sorry.


Christopher:
All the points you mention are definitely true if you have other concerns than just scoring high - i.e. admissions/timing concerns. My question was more just specifically if your intent was to try and get as high a score as possible (and timing was less of an issue).

Thinking more about this - I wonder if it's due to the fact that more people take the test in December but the ratio of high scorers to low scorers doesn't scale equivalently at the same rate.

i.e. if the ratios were the same, and when the number of test takers doubled it was as if everyone grew a twin with the exact same scoring ability, then it would make no difference which month you took the test in.

However, conversely (and what the data would seem to suggest, although you wouldn't be able to prove it) - maybe when twice as many people take the test in December, there's a disproportionately increased number of "average test-takers", but less (as a percentage of the total) "high-scoring" test takers. Therefore if you were a "high scorer" it would be in your benefit to take December because there are a smaller percentage of people who are at your ability or better.

This latter thought is just a hypothesis - not sure how valid it is given that I did a quick glance at scores in the ranges around 130 and it still seems that "Dec" is easier.


Me:
If you look at the data from LSAC on the number of test-takers for each exam, you'll find that the September/October exam is the most popular, by far.

I hypothesize that there are fewer strong test-takers in the December pool because it's late in the cycle. Perhaps a lot of the weaker test-takers who take, or planned to take, the September/October exam retake it in December. Generally, the stronger test-takers from Sept/Oct wouldn't need to retake because they did fine.


Christopher:
I'm inclined to agree with your hypothesis about December test-takers. I think it's a combination of what you mentioned + the fact that (for college kids) Sept allows for summer prep whereas Dec doesn't. Also, Dec runs into the problem of conflicting with exam study.

Additionally, under our current tough economic conditions, I would guess a lot of people may not think about going to law/grad school until they realize that finding a job is harder than it seems. For May graduates, they may not realize this until the summer winds down and the end of the year approaches, and suddenly they find themselves in a position where they want to take the LSAT, GMAT, etc. "just to leave their options open." Once again, though, there's unfortunately really no way to prove this.

Photo by bensonkua / CC BY-SA 2.0

Creating the LSAT's Raw Score Conversion Chart (aka, the Curve)

LSAT Blog Curve Sign

This post is Part 5 of the "The LSAT Curve" series. The series starts with The LSAT Curve | Test-Equating at LSAC.

Creating the LSAT's Raw Score Conversion Chart (aka, the Curve)
Let's suppose that, on a given exam, the 170-scorers got 12 questions wrong altogether on the 4 scored sections.

That's an average of 3 questions wrong per scored section. Let's assume they got an average of 3 questions wrong on the games section.

However, let's say that a subset of those 170-scorers all took the same experimental Logic Games section. What if these test-takers got an average of 5 questions wrong on that Logic Games section?

If so, we can say that this experimental Logic Games section is harder for 170-scorers than the scored LG section. As a result, this section deserves a slightly more generous "curve" than the scored section of LG does - for 170-scorers.


Let's suppose the average 150-scorer got 40 questions wrong altogether on the 4 scored sections of this very same exam.

That's an average of 10 questions wrong per scored section. Let's assume they got an average of 10 questions wrong on the games section.

What if a subset of those 150-scorers took the very same experimental Logic Games section and got an average of 10 questions wrong on it?

If so, we can say that this experimental LG section is no harder for 150-scorers than the scored LG section was. As a result, this section doesn't deserve a more generous "curve" than the scored section of LG does - for 150-scorers.

(Of course, all of this is only about one Logic Games experimental section. Perhaps a different group of 170-scorers took an experimental Logical Reasoning section that was easier for them than their scored Logical Reasoning sections. Perhaps a different group of 150-scorers took this experimental LR section and found it more difficult than their scored LR sections. If the experimental Logical Reasoning section were placed on an exam with the experimental Logic Games section mentioned earlier, the differences might cancel each other out.)


The fact remains that a given exam might be of varying levels of difficulty for test-takers at different levels.

If a particular test is very difficult for 170-scorers, then the "curve" for them will be very generous (meaning someone who "deserves" a 170 won't have to answer as many questions correctly to get a 170 as they would have if the exam weren't as difficult).

If a particular test is of normal difficulty for 150-scorers, then the curve can just be normal for them, meaning it'll require the typical amount of questions correct in order to get a 150. People whose "true scores" are at 150 won't need any messing with the "curve" to get the 150 they deserve.
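
(If it helps to see the idea in miniature, here's a toy Python sketch. To be clear, this is not LSAC's actual procedure - LSAC uses IRT true-score equating, as quoted below - and the raw-score cutoffs are made up. It only shows the direction of the adjustment: a form that's harder for 170-level test-takers gets a lower raw-score cutoff for 170, while the cutoff for 150 stays put.)

```python
# Toy illustration only -- NOT LSAC's actual IRT true-score equating, and the
# numbers are made up. It just captures the idea from the example above: the
# conversion is adjusted separately at each ability level, based on how pretest
# takers at that level handled the new material.
reference_cutoffs = {170: 89, 150: 61}        # hypothetical raw scores needed on a reference form
extra_misses_on_new_form = {170: 2, 150: 0}   # hypothetical: new form is harder for 170s, same for 150s

new_form_cutoffs = {
    score: reference_cutoffs[score] - extra_misses_on_new_form[score]
    for score in reference_cutoffs
}
print(new_form_cutoffs)  # {170: 87, 150: 61} -- a more generous "curve" only at the top
```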

A nice summary on "true scores" and test-equating from LSAC:
Testing organizations typically disclose test forms after they have been administered to large test-taker populations. Therefore, several test forms must be developed annually to be as similar as possible to one another in terms of statistical and content attributes. Although a great deal of effort is placed on assembling comparable tests, forms will tend to vary somewhat in terms of their statistical characteristics. Hence, scores must be transformed in order to enable direct comparisons across forms. The process by which scores are adjusted so as to make them comparable to each other is referred to as equating. The Law School Admission Council (LSAC) employs item response theory (IRT) true-score equating to equate the LSAT.
Source: Assessing the Effect of Multidimensionality on LSAT Equating for Subgroups of Test Takers (Executive Summary)


***


If any of this "LSAT curve stuff" seems confusing or unfair, don't waste time worrying about it.

The bottom line is that it doesn't matter which month you take the exam, and it doesn't matter how easy or difficult your particular exam is. The raw score conversion chart, based on LSAC's statistical data, addresses all of those issues and makes sure that everything's equal in the end.

Just focus on answering as many questions correctly as possible, and let LSAC take care of the rest.

***
Want to start at the beginning? Begin with The LSAT Curve | Test-Equating at LSAC.

Also, in case you missed them:

-I published the Raw Score Conversion Charts for every LSAT PrepTest ever released in one big spreadsheet.

-I created graphs and charts demonstrating changes over time in how many questions you can miss on the LSAT and still get a 170 or 160, respectively.

Photo by revolute / CC BY-NC-SA 2.0

LSAT Question Difficulty Ratings

LSAT Blog Difficulty Ratings Sign

This post is Part 4 of the "The LSAT Curve" series. The series starts with The LSAT Curve | Test-Equating at LSAC.

Deciding which questions are "difficult"

Difficulty is all relative, right?

One way to make a question difficult is to include less-obvious conditional indicator words (using "if" and "then" kinda gives the game away).

Another way is to make the question about a boring topic that few test-takers know about, like morality, aestheticism, or brown dwarf stars.

The issue, however, is that it's not always clear how difficult a question actually is in practice. The experimental section allows LSAC to determine how tens of thousands of test-takers perform on its latest questions.

Let's assume that LSAC gave a particular Logical Reasoning section to a bunch of test-takers on a given administration of the LSAT.

If only a small percentage of test-takers get question #17 right, and these are primarily the same test-takers who scored 170+ on the 4 sections of the exam that counted, then LSAC can safely assume that this is a question with a "Difficulty Rating" of 5.

If a large percentage of test-takers get question #3 right, and it's mainly the sub-140-scorers who get it wrong, then LSAC can safely assume this is a question with a "Difficulty Rating" of 1.

However, if a small percentage of test-takers get question #5 right, and these test-takers are mainly the sub-140-scorers, then LSAC can safely assume that something's very wrong with this question. This question is unlikely to make it into any part of the scored exam, at least, not in its current form. This question just isn't doing its job.

Similarly, if a large percentage of test-takers get question #20 right, but the 170+-scorers aren't getting it right, then something's probably wrong with this question. This question isn't doing its job either.

Cases where questions aren't doing their job are probably rare. LSAC's people generally know what they're doing, but it's worth thinking about the fact that LSAC trusts the opinions of its top scorers. Since they get the greatest number of questions right, most of them probably know what they're doing when it comes to the LSAT (or they're just really lucky).
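
(Here's a toy Python sketch of that sort of item analysis. It is not LSAC's actual psychometric model - they use item response theory - and the percent-correct figures are invented. It just shows the basic idea: a pretest question that low scorers answer correctly more often than high scorers gets flagged.)

```python
# A toy item-analysis sketch -- not LSAC's actual psychometric model (they use IRT),
# and the percent-correct figures below are invented. For each pretest question,
# compare how often high scorers and low scorers answered it correctly.
items = {
    "Q17": {"high": 0.55, "low": 0.10},  # hard, but separates ability levels -> high difficulty rating
    "Q3":  {"high": 0.98, "low": 0.75},  # easy; mainly low scorers miss it -> low difficulty rating
    "Q5":  {"high": 0.20, "low": 0.45},  # low scorers beat high scorers -> something's wrong, flag it
    "Q20": {"high": 0.60, "low": 0.90},  # same problem -> flag it
}

for name, pct in items.items():
    discrimination = pct["high"] - pct["low"]
    verdict = "flag for review" if discrimination <= 0 else "usable"
    print(f"{name}: discrimination {discrimination:+.2f} -> {verdict}")
```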

***

Next week, we move on to Part 5: Creating the LSAT's Raw Score Conversion Chart (aka, the Curve)

Want to start at the beginning? Begin with The LSAT Curve | Test-Equating at LSAC.

Photo by sea-turtle / CC BY-NC-ND 2.0

The Experimental Section and Difficulty of LSAT Questions

LSAT Blog Experimental Section

This post is Part 3 of the "The LSAT Curve" series. The series starts with The LSAT Curve | Test-Equating at LSAC.

"The LSAT is equated so that a test score obtained in the current year is comparable to scores obtained in previous years." - LSAC (Executive Summary)

Test-equating requires pre-testing.

After LSAC's elves write individual LSAT questions, they compile these questions into various 35-minute sections. If you've taken the LSAT before, you've already completed one of these sections as the hated "experimental section." In LSAC language, this is the "pretest" section where new questions are tested to:

provide test development staff with statistical information about each question, and with information about possibly ambiguous or misleading information in the question or in one or more of the answer choices. If problems are identified, either the question is discarded or it is revised and pretested again. All questions that pass the quality standards of a pretest administration are placed in the LSAT test question item bank. New test sections are assembled by selecting questions from this LSAT item bank. Each fully assembled test section is administered on one or more separate occasions for the purpose of pre-equating the new form.

Pre-equating is a statistical method used to adjust for minor fluctuations in the difficulty of different test forms so that a test taker is neither advantaged nor disadvantaged by the particular form that is given. Following each pre-equating administration, the statistical information about each question is reviewed to assure that the data support that the question is of appropriate difficulty, discriminates higher ability test takers from lower ability test takers, is unambiguous, and has a single best answer. When the test is given at a regular LSAT administration, but before final scoring is completed, statistical analysis is conducted one last time. Each question is evaluated using the same criteria that were applied following the pretesting and pre-equating administrations. If a problem is found, the question is eliminated from the test before final scoring and reporting are accomplished.
(Source: Page 2 of Policies and Procedures Governing Challenges to Law School Admission Test Questions. I divided this excerpt into two paragraphs. Just like some Reading Comp passages, it lacked paragraph breaks.)

Most of us know LSAC pretests questions in order to avoid using flawed questions that will later be withdrawn from scoring. This is what they mean by "is unambiguous and has a single best answer." (second para, second sentence)

However, the other parts of that particular sentence are worth noting.

Questions have various levels of difficulty
- LSAC is careful to make sure "the question is of appropriate difficulty" and "discriminates higher ability test takers from lower ability test takers."

LSAC wants to have a certain number of super-easy, easy, medium, difficult, and super-difficult questions on each exam (as part of the test-equating process).

It's not enough to just make a bunch of super-difficult questions and say whoever answers them right deserves to get into Harvard Law School.

How would you distinguish the students who got those questions wrong from each other?

For law schools, it's not enough to separate the 175+-scorers from everyone else. You also have to separate the 170-scorers from the 165-scorers from the 160-scorers, etc.

If you make every single question very difficult, some test-takers will get them all right, but most will just end up guessing. Obviously, LSAC won't know whether a test-taker guessed or not on a given question. However, if most test-takers end up guessing, the LSAT will no longer be a good predictor of law school performance (which is what it's supposed to be, after all), and the LSAT won't be able to adequately distinguish a good test-taker from a decent one from a bad one.

By including questions of various levels of difficulty, the LSAT meaningfully separates test-takers into multiple ability levels - not just 175+ and "everyone else."

For insight into how LSAC views the difficulty of various questions, check out the SuperPrep book's explanations (which are written by LSAC). After each question, you'll see a "Difficulty Rating" of anywhere from 1 to 5.

***

Next week, we move on to Part 4: LSAT Question Difficulty Ratings

Want to start at the beginning? Begin with The LSAT Curve | Test-Equating at LSAC.

Photo by practical owl / CC BY-NC 2.0

LSAT PrepTest Raw Score Conversion Charts

LSAT Blog Raw Score
In this blog post, I include the LSAT PrepTest raw score conversion charts for every released LSAT PrepTest. The below pictures show the minimum number of credited responses (correctly-answered questions) that will allow you to get a particular score.

At the end of this blog post, I include links to some analysis of the below data.

First, some notes on the LSAT PrepTest raw score conversion charts:

"__*" means no test-taker received that score on that exam.

Here's a big list of released LSAT PrepTests.

"SP" stands for SuperPrep, Official (Feb 97) is the Official LSAT PrepTest with Explanations, and Free (June 07) is a free PDF on LSAC's website.

***

You can view this information as a series of picture files. One click to enlarge each picture, and you're there.

The following pictures cover raw score conversions for LSAT scores from ~140-180.


PrepTests A, B, C, Feb 97, and 1-17:
LSAT Blog Raw Score Conversion Chart 1

PrepTests 18-36:
LSAT Blog Raw Score Conversion Chart 2

PrepTests 37-54 (and June 07):
LSAT Blog Raw Score Conversion Chart 3

PrepTests 55-69:
LSAT Blog Raw Score Conversion Chart 4

PrepTests 70-74:
LSAT Blog Raw Score Conversion Chart 5

You probably won't score anywhere close to 140 once you start doing full-length PrepTests towards the end of your prep (that's when people tend to start thinking about raw score conversions). If you're scoring below 140, or if you're just plain interested, here are the raw score conversion charts for LSAT scores below 140:


PrepTests A, B, C, Feb 97, and 1-17:
LSAT Blog Raw Score Conversion Chart 5

PrepTests 18-36:
LSAT Blog Raw Score Conversion Chart 6

PrepTests 37-54 (and June 07):
LSAT Blog Raw Score Conversion Chart 7

PrepTests 55-69:
LSAT Blog Raw Score Conversion Chart 8

PrepTests 70-74:
LSAT Blog Raw Score Conversion Chart 9

***
To learn about how raw scores and score conversions work, see the LSAT Curve series starting with: The LSAT Curve | Test-Equating at LSAC.

Also see: LSAT Graph / Spreadsheet: How Many Questions to Score 170 and 160 and Easiest LSAT Curve: December | Hardest LSAT Curve: June

Photo by viewmaker

All actual LSAT content used within this work is used with the permission of Law School Admission Council, Inc., Box 2000, Newtown, PA 18940, the copyright owner. LSAC does not review or endorse specific test preparation materials or services, and inclusion of licensed LSAT content within this work does not imply the review or endorsement of LSAC. LSAT is a registered trademark of LSAC.

LSAT Graph / Spreadsheet: How Many Questions to Score 170 / 160

LSAT Blog Fancy Line Graph

After I compile a lot of data, I like to analyze it.

I just published the Raw Score Conversion Charts for every released LSAT PrepTest, so I decided to create a graph illustrating the maximum number of questions you can miss on every LSAT PrepTest and still get a 170. (I also made one about getting a 160 - scroll to the end for info about that one)

(Click image to enlarge, and see details and analysis below.)

LSAT Blog Line Graph Max Number Questions Incorrect to Score 170 from PT1-PT59

Details

This graph covers all released LSAT PrepTests to date (PrepTest 1 - PrepTest 59). It includes the SuperPrep exams, the Feb 97 LSAT, and the June 07 LSAT (PDF), and it places all exams in chronological order from left-to-right on the x-axis.

(In the data lists below, "SP" stands for SuperPrep, Official (Feb 97) is the Official LSAT PrepTest with Explanations, and Free (June 07) is a free PDF on LSAC's website.)

There wasn't enough space to include the data for the x-axis (the horizontal line) on the graph itself. However, I've uploaded the data as 2 separate images below (click images to enlarge) so you can see the # of questions you can miss and still get a 170 for specific PrepTests.

You might wonder why I look at the number of questions you can get wrong and still get a 170. Why don't I look at the number of questions you need to get right?
Because not all exams have the same number of questions. The number of questions per exam has ranged from 99 to 102. For this reason, it makes sense to look at the difference between the number of questions required for a 170 and the total number of questions on a given exam.
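
(A quick Python sketch of that subtraction, using made-up totals and raw-score cutoffs, just to show why the "questions you can miss" figure is comparable across exams of different lengths:)

```python
# Made-up (total questions, raw score needed for 170) pairs -- not real conversion data.
# Because the total varies from exam to exam (99-102 questions), the "questions you
# can miss" figure is what stays comparable.
exams = {
    "Exam A": (101, 89),
    "Exam B": (100, 90),
    "Exam C": (102, 91),
}

for name, (total, needed_for_170) in exams.items():
    print(f"{name}: raw cutoff {needed_for_170}, can miss {total - needed_for_170} and still get a 170")
```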

Data Images
(Click images to enlarge.)

PrepTests 1-36:




PrepTests 37-59:




Analysis
As you can see, there's been a significant downward trend in the number of questions you can get wrong and still get 170. That green line on the graph illustrates this decreasing average.

Basically, you can't miss as many questions today as you used to and still get a 170. The average has moved from -13 (13 incorrect) to -11. A change of 2 raw score points (getting 2 fewer questions wrong) might not seem very significant, but it's significant when the historical range is only 8-16.

The cushy days of getting 15 or 16 questions incorrect but still walking away with a 170 score are long gone.


What does this all mean for you?

I only posted this graph and data to satisfy your curiosity and give you a sense of the trend.

In short, the raw score conversion charts of older exams don't adequately represent those of more recent exams. There's been a shift. You'll need to get more questions right than you would've in the past in order to get a 170.


The bottom line:

Your goal should be the same as it's always been - get as many questions correct as possible.


***

I made a similar graph depicting how many questions you can get wrong and still get a 160. As you might expect, there's a downward trend for that one also.

(Click image to enlarge.)

LSAT Blog Line Graph Max Number Questions Incorrect to Score 160 from PT1-PT59

The range for this graph is 22-31 questions incorrect, with the -31 occurring some time ago and the -22 being much more recent. The average # of questions you can get wrong and still get a 160 has dropped from 29 to 25.

My analysis for this is pretty much the same as for the 170 graph.


Data Images
(Click images to enlarge.)


PrepTests 1-36:




PrepTests 37-59:




Photo by blprnt_van

The LSAT Curve | Test-Equating at LSAC

LSAT Bell Curve

This post is Part 1 of the "The LSAT Curve" series of blog posts. Click here for links to each part of the series.

There's a lot of confusion about the LSAT's curve. The LSAT is not actually scored to a curve, but most test-takers think it is.

This series is my effort to explain LSAC's process of test-equating, raw score conversions, percentiles, and why the test isn't actually curved. Because I dislike statistics (and because most of you probably do also), this blog post involves very little math. However, it might involve some thinking.

You've been warned.

LSAC's Associate Director of Psychometric Research, Lynda Reese, recently wrote the following to one test-taker who asked about the curve (I've added the links):
[T]he LSAT is not graded to a curve...Rather, for every form of the LSAT, a statistical process called test equating is carried out to adjust for minor differences in difficulty between different forms of the test. Specifically, the item response theory (IRT) true score equating method is applied to convert raw scores (the number correct) for each administration to a common 120 to 180 scale. A detailed description of this methodology can be found in...Applications of Item Response Theory to Practical Testing Problems...The equating process assures that a particular LSAT scaled score reflects the same level of ability regardless of the ability level of others who tested on the same day or any slight differences in difficulty between different forms of the test. That is, the equating process assures that LSAT scores are comparable, regardless of the administration at which they are earned.
I'm not a psychometrics expert, but I decided to go ahead and learn more about how LSAC constructs the exam and ensures different PrepTests are of relatively equal difficulty.

I looked up the book Ms. Reese referenced (and believe me, it wasn't exactly a walk in the park).

The following is my understanding of how LSAC creates each LSAT and goes about the test-equating process. Feel free to leave questions and comments, especially if you have a decent understanding of statistics, psychometrics, etc. LSAC's also welcome to leave comments. They haven't commented on the blog yet, but the door's always open.

If you're new to the LSAT, see the LSAT FAQ for more on the basics before getting into all the details.

If you're not new to the LSAT, read on, starting with these definitions of basic terms and concepts:

Conversion Chart: Chart at the end of each PrepTest that helps you translate a raw score into a score out of 180

Percentile: The percentage of test-takers whose scores fall below yours. If you score in the 50th percentile, you scored higher than half of all test-takers. If you score in the 97th percentile, you scored higher than 97% of all test-takers.

PrepTest: Previously administered and released LSAT exam

Psychometrics: The study of psychological measurements. As far as we're concerned, it's the "science" of standardized testing.

Raw Score: The number of questions you answer correctly on the LSAT

Test-equating/Pre-equating: "a statistical method used to adjust for minor fluctuations in the difficulty of different test forms so that a test taker is neither advantaged nor disadvantaged by the particular form that is given" - LSAC (PDF).

Test form: A particular LSAT exam

Scores have to be meaningful and consistent
The LSAT is a standardized exam. This means that a 160 on the Feb 2010 LSAT should be equivalent to a 160 on the June 2010 LSAT, which should be equivalent to a 160 on the October 2010 LSAT, etc. Law schools can't be bothered to look at particular Logic Games, Logical Reasoning, and Reading Comprehension sections on various exams to see whether students with identical scores actually performed at different levels. They can't be bothered to look at test-takers' raw scores, either. That's why they have equated numerical scores out of 180, after all.

Administering the same questions over and over wouldn't work
One theoretical (and stupid) way to ensure that all scores were equal would be to create only one LSAT PrepTest and administer it over and over. This would ensure that all test-takers were treated equally and that the "raw score conversions" were always fair. However, this ignores the fact that test-takers would share information with each other.

People who took the February 2010 LSAT would give/sell info about questions that appeared to test-takers who took it in June 2010, etc. Under such a system, the later one took the exam, the more inflated his/her score would be, on average. Thus, LSAC can't just keep giving the exact same questions exam after exam.

For this reason, LSAC needs to create different exams for each released test administration and make them of relatively equal difficulty. A 160 on one LSAT (aka "test form") needs to be equivalent to a 160 on any other LSAT.

***

Read on for Part 2: Why the LSAT Isn't Scored on a Curve: Myth and Fact

Photo by hname / CC BY 2.0

Why the LSAT Isn't Scored on a Curve: Myth and Fact

LSAT Blog Why Not Curve

This post is Part 2 of the "The LSAT Curve" series. The series starts with The LSAT Curve | Test-Equating at LSAC.

Myth: The LSAT is curved based solely on how everyone does that day.

A lot of test-takers believe that the LSAT is "curved", meaning that you should try to figure out which month's exam will have the greatest percentage of low-scorers and take it with them.

The idea goes:

-If you take the same LSAT as a lot of lower-scorers, you'll look better than you would have otherwise (by comparison) and get a higher score as a result.

-For this reason, you should sabotage your fellow test-takers. Lace their food with laxatives, steal their prep books in the library, etc. Anything to get a leg up on them.

Unfortunately for the dishonest and sneaky among you, LSAC can't just compare all test-takers who took the February 2010 LSAT with each other and have that be it.

Why?

Perhaps February test-takers don't adequately represent LSAT-takers as a whole.


Fact: Different pools of test-takers might perform differently.

Let's assume for a moment that, on average, February test-takers answer fewer questions correctly than the theoretical "average test-taker" would on any other exam. If this were true, the average test-taker would get a higher score than he/she deserves by taking the LSAT in February (all other things being equal).

LSAC can't allow this to happen. If it did, then a 160 on the February 2010 LSAT would be easier for the average test-taker to achieve than a 160 on the June 2010 LSAT, and the 160s would not, therefore, be "equal." Whether the average test-taker intentionally (and foolishly) took the February LSAT with the purpose of being compared to a lower-scoring pool is less important than the results. LSAT scores would not mean as much. They wouldn't be as reliable because one would have to consider the context in which the exam was taken.

For this reason, and due to the fact that there are minor differences in difficulty between exams, scores are not simply curved based on each "test form" in isolation.

The magic solution?

Test-equating.

***

Read on for Part 3: The Experimental Section and Difficulty of LSAT Questions

Photo by petereed / CC BY-NC 2.0

LSAT Logic Games Classification List

LSAT Blog Logic Games Classification List

The following classification covers Logic Games in LSAT PrepTests 39-51 (and June 2007).

(These exams are all available on Amazon.com. In my LSAT study schedules, I recommend saving most of the newer PrepTests for full-length timed practice. In order to avoid "corrupting" those exams, I suggest you avoid looking at this classification for any PrepTest that you plan to take under timed conditions until you've completed that test.)

If your study materials refer to PrepTests by their month and year, rather than by PrepTest number, please see LSAT PrepTests and Dates Administered.

In this blog post, I first group each Logic Game by its classification. At the end of all that, I also classify all Logic Games but place them in order by PrepTest # and date.

(You can also find LSAT Logic Games categorizations for LSAT PrepTests 19-38 and LSAT PrepTests 52-present.)

I've placed an asterisk (*) next to some games that are especially difficult. I've placed a plus (+) next to some that are especially easy. Of course, difficulty is subjective, so please leave comments!

Logic Games by Classification:

Pure Sequencing
PrepTest 43, Game 2
PrepTest 48, Game 2
PrepTest 50, Game 4 *
PrepTest 51, Game 2
PrepTest 51, Game 4


Basic Linear
PrepTest 40, Game 1
PrepTest 41, Game 1 +
PrepTest 42, Game 2 +
PrepTest 43, Game 1
PrepTest 44, Game 1 +
PrepTest 45, Game 1
PrepTest 46, Game 1 +
PrepTest 46, Game 3
PrepTest 47, Game 1
PrepTest 49, Game 1 *
PrepTest 49, Game 4
PrepTest 50, Game 1
PrepTest 50, Game 3 +
June 2007 LSAT, Game 1
June 2007 LSAT, Game 3


Advanced Linear (aka Combination of Linear and Grouping: Matching)
PrepTest 39, Game 1
PrepTest 39, Game 3
PrepTest 41, Game 2
PrepTest 42, Game 3
PrepTest 43, Game 3

PrepTest 44, Game 3

PrepTest 46, Game 2
PrepTest 47, Game 4
PrepTest 48, Game 4
PrepTest 51, Game 3


Grouping: In-and-Out
PrepTest 39, Game 4
PrepTest 40, Game 4
PrepTest 42, Game 1
PrepTest 45, Game 3
PrepTest 47, Game 2
PrepTest 48, Game 1
PrepTest 49, Game 3
PrepTest 50, Game 2


Grouping: Splitting
PrepTest 41, Game 3


Grouping: Matching
PrepTest 39, Game 2 *
PrepTest 42, Game 4 *
PrepTest 43, Game 4
PrepTest 44, Game 2
PrepTest 44, Game 4 *
PrepTest 45, Game 4
PrepTest 46, Game 4
PrepTest 47, Game 3
PrepTest 48, Game 3
PrepTest 49, Game 2
PrepTest 51, Game 1
June 2007 LSAT, Game 4


Circular Linearity
PrepTest 41, Game 4 *


Grouping: Mapping
PrepTest 40, Game 3 *


Linear / Grouping: In-and-Out
PrepTest 40, Game 2
PrepTest 45, Game 2
June 2007 LSAT, Game 2

***

Logic Games by PrepTest # and Date:

PrepTest 39 (December 2002 LSAT)
Game 1 - Advanced Linear
Game 2 - Grouping: Matching *
Game 3 - Advanced Linear
Game 4 - Grouping: In-and-Out

PrepTest 40 (June 2003 LSAT)
Game 1 - Basic Linear
Game 2 - Linear / Grouping: In-and-Out
Game 3 - Grouping: Mapping *
Game 4 - Grouping: In-and-Out

PrepTest 41 (October 2003 LSAT)
Game 1 - Basic Linear +
Game 2 - Advanced Linear
Game 3 - Grouping: Splitting
Game 4 - Circular Linearity *

PrepTest 42 (December 2003 LSAT)
Game 1 - Grouping: In-and-Out
Game 2 - Basic Linear +
Game 3 - Advanced Linear
Game 4 - Grouping: Matching *

PrepTest 43 (June 2004 LSAT)
Game 1 - Basic Linear
Game 2 - Pure Sequencing
Game 3 - Advanced Linear
Game 4 - Grouping: Matching

PrepTest 44 (October 2004 LSAT)
Game 1 - Basic Linear +
Game 2 - Grouping: Matching
Game 3 - Advanced Linear
Game 4 - Grouping: Matching *

PrepTest 45 (December 2004 LSAT)
Game 1 - Basic Linear
Game 2 - Linear / Grouping: In-and-Out
Game 3 - Grouping: In-and-Out
Game 4 - Grouping: Matching

PrepTest 46 (June 2005 LSAT)
Game 1 - Basic Linear +
Game 2 - Advanced Linear
Game 3 - Basic Linear
Game 4 - Grouping: Matching

PrepTest 47 (October 2005 LSAT)
Game 1 - Basic Linear
Game 2 - Grouping: In-and-Out
Game 3 - Grouping: Matching
Game 4 - Advanced Linear

PrepTest 48 (December 2005 LSAT)
Game 1 - Grouping: In-and-Out
Game 2 - Pure Sequencing
Game 3 - Grouping: Matching
Game 4 - Advanced Linear

PrepTest 49 (June 2006 LSAT)
Game 1 - Basic Linear *
Game 2 - Grouping: Matching
Game 3 - Grouping: In-and-Out
Game 4 - Basic Linear

PrepTest 50 (September 2006 LSAT)
Game 1 - Basic Linear
Game 2 - Grouping: In-and-Out
Game 3 - Basic Linear +
Game 4 - Pure Sequencing *

PrepTest 51 (December 2006 LSAT)
Game 1 - Grouping: Matching
Game 2 - Pure Sequencing
Game 3 - Advanced Linear
Game 4 - Pure Sequencing

June 2007 LSAT (unnumbered - free LSAT PrepTest - PDF)
Game 1 - Basic Linear
Game 2 - Linear / Grouping: In-and-Out
Game 3 - Basic Linear
Game 4 - Grouping: Matching

Photo by chiotsrun