OKbridge

A simplified introduction to the "Lehman Rating" System for OKbridge

© Bradley Lehman, 1998

This article is an analogy to illustrate how "Lehman ratings" are calculated. It is not intended to replace the more detailed versions; it is simply here for those who want a quick understanding without wading through mathematical details.


What is the Lehman rating system designed to do?

The primary purpose is to show you a reasonable measurement of your own individual performance, changing from week to week: "Am I playing better than before, or not?" This system is intended to be fairer and more accurate than simply going by win/loss percentages, because it takes into account who your partners and opponents were.

Another purpose is to help provide a "reduced-fear" environment for all players: no one should have to worry that playing with an especially strong or weak partner or opponents will hurt (or unfairly help) his or her own average. The system is supposed to encourage pick-up partnerships and a willingness to play against any opponents. (Attitudes such as "Oh, I see he has a low Lehman rating, I'd better not partner him or he'll take my rating down!" or "That great partner over there with a high rating is going to help mine shoot up!" or "Those opponents are too good for me, they'll take my rating down!" or "Those opponents are easy, let's clobber them and raise our ratings!" are all wrong interpretations, stemming from a misunderstanding of the system. A "scratch" scoring system (direct averages of table results) encourages those attitudes. The Lehman system, as an alternative to scratch scoring, is an attempt to overcome those attitudes and to make a tolerant environment for everyone.)

Some players further treat the reported rating as a rough predictive indicator of an unknown player's OKbridge skills. That is, they use it to guess what level of competence in bidding or play to expect from that person as opponent or partner, when planning their own strategy at the table. Such an interpretation (made at one's own risk) is not something the system directly measures, although there may be some general correlation between a level of skill and a numeric rating. Keep in mind that the system is designed more to measure you against your own earlier performance than against someone else's. Any further assumptions are just as hazardous as assuming that players who use certain bidding conventions (or refuse to) are automatically a certain caliber of player. A player's rating does not indicate how well you will get along with that person! And a high or low rating does not necessarily indicate a strong or weak player, either. For example, a player who did especially well or poorly in the first few weeks of playing OKbridge may be overrated or underrated for many weeks afterward, while his/her rating moves slowly toward a "true" level.

The system also does not imply one's expected performance levels in other types of bridge. It simply reflects how well one has done in the OKbridge environment.

What sends your own Lehman rating up or down?

Playing good bridge or bad bridge, your own contribution at the table. Beyond technique, good bridge includes partnership harmony and cooperation: choosing plays, bids, and overall strategy that bring out your partner's best effort. And good bridge in an individual environment such as OKbridge includes adapting quickly and easily to a variety of partners (and opponents). The Lehman rating system attempts to measure this. If you play well on these terms, in this environment, your rating goes up; if you play badly, it goes down. One of the quickest ways to send your rating down is to be in partnerships where you and your partner do not trust one another: where one or both play "solo bridge," taking actions which do not respect the other partner's contributions.

Is the rating system successful or meaningful?

The success or failure of this system is a matter of personal opinion: some players think it is excellent and useful, while others dismiss it as meaningless, or misleading, or "in need of improvement," or "obviously flawed." And many players simply don't understand it, or they interpret it in ways that are not intended. The best I can do is to explain how it works, so that players may then draw their own conclusions from a fair understanding of its principles. I also have given some "suggestions for improvement" as part of the more detailed description of the system.


An analogy to illustrate how ratings are calculated: earning "smolians" which "sponsors" have bet on you

For the sake of this explanation of the calculation process, assume there is a unit of money called a "smolian" (which could be called anything), and fictitious sponsors gambling on the outcome of each board. A person's Lehman rating (in this scenario) is the amount which a sponsor is willing to risk on each board, betting on that player to earn back the same number of smolians or better. It is the player's average smolian income per board as demonstrated in earlier OKbridge play. (It is a weighted average, where recent boards always count more than older boards.)

How are smolians earned through play?

The process of earning smolians (and thereby calculating new Lehman ratings) is as follows:

All scoring occurs at the end of the week. The computer has a record of the table result (scored in the usual manner as either a matchpoint percentage or IMPs), ranked in the field of everyone who played that board. It also knows who the four players at every table were. It looks up each of those players' most recently calculated Lehman rating, i.e. from the end of last week's cycle. If a player has no rating as of last week (i.e. that player is new to OKbridge), a rating of 50 is assigned.

For each board, a sponsor for each pair creates a table prize by investing, for each player, a number of smolians equal to that player's rating. The sponsor offers each player a chance to win back his/her own average number of smolians, putting those smolians into the prize so they will be available. The prize is the total of all four piles. (Another way of thinking of this: essentially, the sponsor is betting on each individual person's relative chance of winning, based on past record. Being a prudent sponsor, it bets more on the players whose past record shows that they usually win a large share of the smolians, because it thinks it is more likely that those players will win on any given board.)

Now the computer takes the actual score from play: the matchpoint percentage which each of the two partnerships won. (If the scoring was IMPs, it uses a conversion scale to estimate a reasonable percentage from 0 to 100. Any IMP swing of 7.5 or more in either direction is counted as 100% for the winning partnership. Current formula: % = IMP score * 20/3 + 50, to a maximum of 100% and a minimum of 0%.) The computer divides the prize for that table into two piles of smolians, one for N/S and the other for E/W, according to the percentage of the prize each pair won. (For example, if E/W had a 53.25 matchpoint score on the board, they get 53.25% of the prize that the sponsors put up.)
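To make the IMP conversion concrete, here is a small sketch of that formula in Python (the function name is mine for illustration, not anything in OKbridge's software):

    def imp_to_percentage(imp_score):
        # Current formula: % = IMP score * 20/3 + 50, clamped between 0 and 100,
        # so a swing of +/-7.5 IMPs or more becomes 100% or 0%.
        pct = imp_score * 20.0 / 3.0 + 50.0
        return max(0.0, min(100.0, pct))

    # imp_to_percentage(7.5) gives 100.0; imp_to_percentage(-3.0) gives 30.0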

Now within each partnership the computer must divide that partnership's pile fairly. It has no way to determine which of the two players in the partnership did plays or bids which affected the size of the pile (for better or for worse). Therefore, it gives each partner a share of the smolians which will keep those two players' ratings in constant proportion with each other for that board. That is, it divides the partnership's pile in the same proportion that the sponsor invested in each of these two players. (For example, if 50 smolians were put in for North and 60 smolians for South, North gets back 5/11ths of the N/S prize pile, and South gets the other 6/11ths.)
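Putting the last few steps together, here is a rough sketch in Python of how one board's prize might be divided, once the N/S percentage is known. (The names and structure are mine for illustration; they are not OKbridge's actual code.)

    def divide_board_prize(north, south, east, west, ns_percentage):
        # Each player's rating is the sponsor's stake on that player;
        # the table prize is the total of all four stakes.
        prize = north + south + east + west

        # Split the prize between the two partnerships by the table result.
        ns_pile = prize * ns_percentage / 100.0
        ew_pile = prize - ns_pile

        # Within each partnership, split in proportion to the two stakes,
        # keeping the partners' ratings in constant proportion for this board.
        return {
            'N': ns_pile * north / (north + south),
            'S': ns_pile * south / (north + south),
            'E': ew_pile * east / (east + west),
            'W': ew_pile * west / (east + west),
        }

    # With stakes of 50 (North) and 60 (South), North gets 5/11ths and South
    # 6/11ths of whatever the N/S pile turns out to be, as in the example above.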

After scoring every board for the week, the computer totals all the smolians you've won during the week, and compares that against the amount the sponsors bet on you. If you earned some extra smolians for your sponsors, your rating will go up; if you lost money, your rating will go down. This is the same as saying: if you played better than the sponsors expected you to (earned more than the sponsors bet on you), evidently improving your skill at earning smolians, your rating goes up, and the sponsors will be willing to bet more on you next week.

In calculating your new rating, the computer counts the current week's boards twice as heavily as boards you played ten weeks ago. (Old boards lose about 6.7% of their remaining weight each week, while current boards are counted at full strength; after ten weeks, a board's importance has decayed to half strength.) That is, the sponsors are most interested in your most recent display of skill. If you are improving, your rating will go up more quickly than it would if all boards counted equally. If you had a bad week, your rating will go down more quickly, but then in ten weeks those bad boards won't hurt you much anymore, anyway.
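As a rough illustration of this decay, the weekly weighting might be combined along these lines. (This is only one plausible reading of the description above, written as a Python sketch; it is not the official OKbridge formula.)

    DECAY = 0.5 ** (1.0 / 10.0)   # about 0.933 per week: roughly a 6.7% loss of
                                  # remaining weight, reaching half strength
                                  # after ten weeks

    def weighted_average_per_board(weekly_results):
        # weekly_results: list of (smolians_earned, boards_played) pairs,
        # oldest week first, current week last.
        total_earned = 0.0
        total_boards = 0.0
        for weeks_ago, (earned, boards) in enumerate(reversed(weekly_results)):
            weight = DECAY ** weeks_ago      # the current week counts at full strength
            total_earned += weight * earned
            total_boards += weight * boards
        return total_earned / total_boards   # the rating: average smolians per board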

IMP smolians and matchpoint smolians are handled completely separately, for all totals and records. They are as different as the currencies of different countries, and similar only to the extent that they use a similar type of measurement scale and similar methods of calculation. Your two Lehman ratings (one for matchpoints, one for IMPs) are only very roughly comparable to one another: there is certainly some correlation through your general set of bridge skills, but the two scoring systems are different types of bridge.

Some observations from this analogy

If one pair at a table appears to be stronger than the other, according to the players' ratings, note that its sponsor has contributed more smolians to the prize than the rival sponsor: it has a better chance of seeing its pair win, so it has to pay extra for the favorable odds. Or, to state it from another angle: with the way the sponsors lay out their money, the stronger pair must beat the weaker pair by some definite point spread if everyone is to break even.

As with a stock market, your sponsors don't care so much about your actual price (your rating) as about watching your price's change from cycle to cycle. You make them happy by earning more smolians than they invest in you, and the happier you make them, the more your price goes up.

If you play a lot of OKbridge, your Lehman rating will move more slowly than if you play only infrequently. Any particular board won't affect your rating very much. This is like saying that if the sponsors know you well, they have firmer habits about how much to invest in you.

If you play no boards at all during the week, your Lehman rating stays constant for the week. When you resume playing, your new boards will count more heavily than the old ones, and your rating will move quickly. This is like saying that the sponsors are less sure about you if they haven't seen you for a while, and they form a fresher opinion about how much to invest in you.


"But what about the scores I see on the screen after playing each board? They showed me that I was playing well, yet my Lehman rating went down! I don't understand why!"

As with all OKbridge scoring, the result of each board is calculated at the end of each week, after the board has been played many times. The score you see immediately after playing a board is merely an estimate of your final matchpoint percentage or IMP score, depending on how well you've done among people who have played that board so far. This immediate score you see has nothing to do with the way your Lehman rating is calculated. It suggests only in general how well you are doing in "scratch" scoring for that board, so far. It does not show any of the adjustments for partner or opponents, does not say anything about the smolians you will earn for the board, nor is it the final IMP or matchpoint score you will receive on the board.


"But my scratch table scores for the week show that I did especially well, yet my Lehman rating went down some!"

"Doing well" in a scratch score is not the same as "doing well" with the smolians, and it's not a bug in the system programming. Sure, you may have outscored your opponents at the table. But the sponsors put a nice stack of smolians on the probability that you would not only beat those opponents, but beat them even more severely than you did. If you didn't beat the point spread they had on you, they lost money and you disappointed them. Sorry! Either you played below your own standard, or your opponents played better than theirs (while still losing to you). Of course, in another sense, you still apparently played better bridge than your opponents did, because you outscored them; that might be some consolation here.

Some people are pleased by doing better than others in some measurable way, and that's fine. Other people (your opponents, in this case) are pleased by doing better than was expected of them in some measurable way: evidence of self-improvement or good fortune. You may interpret the measurement scales in any way that is useful to you.


"What does a Lehman rating of 53 really mean?"

Subtract your rating from 100. Find two opponents who have that rating (47.00), and a partner for yourself who has the same rating as you (53.00). Now play for a while at such a table. If everyone plays at their normal level, at the end of the session your matchpoint score will be 53%, theirs will be 47%, and everyone's Lehman rating will stay the same. That is, the sponsors assume your partnership will beat theirs by that point spread. (If this is IMPs, the sponsors expect you to win by the number of IMPs that translates back to 53%: currently about +0.45 per board.) The Lehman rating number simply means that that's the scratch score you and an equally rated partner should expect to get against two opponents whose ratings are the opposite of yours. (Another example: if you have a 45 rating, you and another 45 merely have to play 45% against two 55's to hold your ratings steady; if you play better than that, say 48%, even if you "lose" at the table your rating goes up, because you did better than expected!)
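The IMP figure quoted above is just the conversion formula run backwards; here is a quick sketch (the function name is mine):

    def percentage_to_imps(pct):
        # Invert % = IMPs * 20/3 + 50: the per-board IMP margin that
        # corresponds to a given percentage.
        return (pct - 50.0) * 3.0 / 20.0

    # percentage_to_imps(53.0) gives 0.45 IMPs per board, as quoted above.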

To calculate the amount that you and an actual partner need to win against two given opponents (i.e. the point spread predicted by the sponsors when they invest their smolians), simply add your partner's and your own ratings together and divide that total by the sum of all four players' ratings. (For example: if you have 49, partner 53, opponents 51 and 42, you need a 52.3% game: 102/195.) If your result at the table is higher than that, partner's and your own ratings go up; if lower, they go down. (Note: as explained above, "result at the table" here means the result at the final reckoning of the week, when the board is officially scored, not the estimated result you see immediately after playing.)
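Here is the same calculation written out as a small Python sketch (names mine, purely for illustration):

    def required_percentage(my_rating, partner_rating, opp1_rating, opp2_rating):
        # Your pair's combined ratings divided by the sum of all four ratings:
        # the scratch percentage you need just to hold your ratings steady.
        pair = my_rating + partner_rating
        total = pair + opp1_rating + opp2_rating
        return 100.0 * pair / total

    # Example from the text: required_percentage(49, 53, 51, 42) gives
    # 102/195, about a 52.3% game.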


For further details about the principles and calculations of the system, see the fuller explanation.


© 1998, Bradley Lehman (and see also my character sketch at the OKbridge picture gallery)