BBO Discussion Forums: Could AI revolutionize / improve Bridge scoring anytime soon? - BBO Discussion Forums

Could AI revolutionize / improve Bridge scoring anytime soon?

#1 User is offline   EzioBridge 

  • Pip
  • Group: Members
  • Posts: 8
  • Joined: 2020-January-15

Posted 2025-December-14, 10:53

I'm just curious.


Paraphrasing a response post made by Barmar in 2012(!)...

'MP scoring only looks at the scores, it can't tell WHY you got the score you got. You may get a high % score either through good play, or through poor play by your opponents ('a gift')...
But unless we decide to replace matchpoint scoring with judges who examine the play and decide who actually played "better", the best we can do is look at the scores...'

So that raises the question.
Now that AI can probably (easily?) distinguish between 'good' play at Bridge and 'poor' play by an Opponent, would it be possible and desirable to create a new form of scoring whereby AI effectively acted as a 'judge' to determine what % score you should get for a given hand?
Critically, your % + your Opp's % *need not* then sum to 100%.
You might play a board exactly on a par with others (so score 50%), but if you made an extra trick due to your Opponents' bad error, they might only score (say) 30% on the same deal.
That seems a lot fairer than the current system whereby you might score 70% on the board (undeserved), and your Opponents 30%.



It seems to me that would be a very desirable scoring system, and eminently possible if playing online.
Has it already been considered, or pursued?

I'm just curious, no more.
0

#2 User is offline   mike777 

  • PipPipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 17,567
  • Joined: 2003-October-07
  • Gender:Male

Posted 2025-December-14, 11:37

AI will certainly change bridge.
Perhaps overlooked is how AI will change human evolution and human biology.
Example:
Eye glasses changed human vision.
Hearing aids changed human hearing.
How will AI change human evolution and biology?

If AI can improve human memory and speed/capacity of analysis then bridge...??
0

#3 User is offline   awm 

  • PipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 8,621
  • Joined: 2005-February-09
  • Gender:Male
  • Location:Zurich, Switzerland

Posted 2025-December-14, 16:02

You seem to greatly overestimate the degree to which AIs understand bridge. The current iterations of LLMs (e.g. ChatGPT5.2 and Gemini3) do not understand bridge at all. While it would probably be possible to train a special-purpose AI that does understand bridge, there isn't much incentive for any deep-pocketed organisation to push for this. There's a good article about this on bridgewinners if you're interested.
Adam W. Meyerson
a.k.a. Appeal Without Merit
0

#4 User is offline   pescetom 

  • PipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 9,125
  • Joined: 2014-February-18
  • Gender:Male
  • Location:Italy

Posted 2025-December-14, 16:06

EzioBridge, on 2025-December-14, 10:53, said:

Now that AI can probably (easily?) distinguish between 'good' play at Bridge, and 'poor' play by an Opponent, would it be possible and desirable to create a new form of scoring whereby AI effectively acted as a 'judge' to determine what % score you should get for a given hand? [...] It seems to me that would be a very desirable scoring system, and eminently possible if playing online.


There is no need to invoke AI if we want to change the rules to avoid rewarding side A for mere errors of side B. Simple algorithms could already identify many clear errors (or gross deviations from the rest of the field and/or expected result) and avoid rewarding the other side.
My impression of current AI is that it is nowhere near being able to identify the remaining outlying cases. I concede that this could change quickly.
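For what it's worth, the "simple algorithms" part of this seems doable today. Here is a minimal sketch of the idea, where everything specific (the 300-point threshold, capping the beneficiary at average) is an invented assumption for illustration, not anything BBO or the Laws actually do: matchpoint a board as usual, but cap a pair's matchpoints at average when their raw score is a gross outlier against the field, i.e. a likely gift.

```python
# Toy "don't reward gifts" scoring. The gift_threshold and the
# cap-at-average rule are illustrative assumptions only.
from statistics import median

def matchpoint(scores):
    """Standard matchpoints: 1 per pair beaten, 0.5 per tie."""
    return [
        sum(1.0 if s > t else 0.5 if s == t else 0.0
            for j, t in enumerate(scores) if j != i)
        for i, s in enumerate(scores)
    ]

def capped_matchpoint(scores, gift_threshold=300):
    """Cap a pair at average (50%) when their score exceeds the field
    median by more than gift_threshold points -- a crude proxy for
    'the opponents handed you this result'."""
    mps = matchpoint(scores)
    top = len(scores) - 1
    med = median(scores)
    return [
        min(mp, top / 2) if s - med > gift_threshold else mp
        for mp, s in zip(mps, scores)
    ]

field = [620, 620, 650, 170, 1370]   # NS scores on one board
print(matchpoint(field))             # [1.5, 1.5, 3.0, 0.0, 4.0]
print(capped_matchpoint(field))      # +1370 outlier capped at 2.0
```

Of course, a raw-score outlier test cannot tell a gift from a brilliancy, which is exactly where the hard "remaining outlying cases" start.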
0

#5 User is offline   mycroft 

  • Secretary Bird
  • PipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 8,154
  • Joined: 2003-July-12
  • Gender:Male
  • Location:Calgary, D18; Chapala, D16

Posted 2025-December-14, 18:07

Hmm. My partner of many many years ago pointed out something to me, that I have never forgotten: "To be a really good bridge player, you have to have a small sadistic streak. Not anything massive, but you *have to* take enjoyment in setting a trap for the opponents and having them fall for it."

I am also reminded of a 3.5-table 99er game where, because there were four directors staffed for what really was a 3.5-table count, I had enough "spare time" to play about 2 boards of the 4-board sitout with the caddymaster and teacher of about half the field. Of course it didn't count, of course if a call came, I left the table, of course... but it alleviated the half-hour sitout some.

On one hand, the teacher/dummy said "you took a very long time playing to trick 1. I think the opponents would be interested in what you were thinking about. Why don't we put the cards back up and you walk through it?"

Which, normally, would have been a really good suggestion (and, frankly, was here, too, but). Unfortunately, I had to start by saying "So, when I look at the hand, I see 8 tricks in 3NT, with no way to get a ninth without giving up the lead. And the opening lead was the best, and when the defence gets in, they can cash enough tricks in that suit to set me. So, how do I play the hand to look like I'm happy for the lead to be continued, and suggest they switch?" And then walked through exactly that, how this suit will never be switched to because dummy, but this one could look weak, but then I have to give up the lead before they get a chance to signal,...

Now, for this story, of course it worked, they switched, and I wrapped 3NT. And yes, they were novices and I - wasn't. Which doesn't make me feel very good about it, and particularly didn't make me feel very good about explaining that "how to hoodwink you two newbies into giving me the contract" was what took so long...but, as the first quote reminds us, this is a very important bridge skill, one that you should be proud of (quietly, at least until the opponents leave,...) when you do it.

All this to say: even if we get spicy autocomplete to the point where it is able to tell the difference between good play and poor defence, will it be able to tell the difference between poor defence and good play-induced poor defence?
Long live the Republic-k. -- Major General J. Golding Frederick (tSCoSI)
0

#6 User is offline   Huibertus 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 410
  • Joined: 2020-June-26

Posted Yesterday, 12:56

Currently AI surely isn't anywhere near good enough at bridge to even consider basing scoring on its opinions.

IF there ever comes a day AI bridge IS better than human bridge, it STILL is a very bad idea.
- It is true bad plays can lead to good scores on a board. That is one of the charms of the game: smile and be nice to the opponent that benefits from playing poorly. If you are SURE they play poorly more often, you'll more than get compensated.
- Bridge is about doing the right thing more often than the opponents, not about doing the right thing on every single board.
- Sometimes (not always) the right play of a hand is up for debate, not objective. Like, during bidding you are sure there is overbidding, but who do you assess to be the culprit, and why? An opponent discards to complete your count on his hand: does he do this on purpose so you'll play for the drop, and should you therefore finesse instead? Or does he lack the skills? Or does he have the skills but WANTS to ensure you play for the drop?
- One opponent must be falsecarding his distribution. Which one? And why?
- What if AI finds out, for instance, that some relay Precision bidding systems are best at finding the best contract on average? Will it then penalize inferior natural systems even when they find the right contract, because it wasn't bid the 'right' way?

IMHO the idea of basing scoring on the opinion of a juror should be reserved for a circus, it has no place in Bridge.
0

#7 User is offline   Huibertus 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 410
  • Joined: 2020-June-26

Posted Yesterday, 13:10

IMHO a scoring system based on the opinion of a juror even if it is AI should be reserved for circus acts, it has no place in Bridge.

- Bridge is NOT about doing the right thing on every single hand; that's simply impossible. It is about doing the right thing more often than opponents.
- What IS right is debatable on MANY occasions.
1. You correctly assess during bidding that someone must be overbidding, as there are 50 HCP on this board. Who is it, and why? And why is your assessment better or worse than anyone else's?
2. You have a suit you must play without a loser; you can play for a finesse or a drop. One opponent discards in other suits in a way that gives you full count on his hand. Does he do this on purpose to trick you into playing for the drop? Or does he lack the skills to even consider such a thing? Or does he KNOW you'll consider him tricking you, and does he want you to not believe him? What if he had NOT discarded to give you full count?
3. What if AI establishes that relay Precision systems are best for finding the right contracts most of the time? Will it penalize natural systems even when they find the right contract, because it was reached by means of inferior bidding?
4. A lesser player having a good score because of bad play simply is part of the fun of bridge. Just smile, compliment them, and realize they will play badly on lots of occasions and will more than compensate you (that is, IF your assessment of their bad play was right; I personally have adjusted my assessment of opponents over time, having been wrong initially).

All in all a bad idea. It should not happen.
0

#8 User is online   helene_t 

  • The Abbess
  • PipPipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 17,394
  • Joined: 2004-April-22
  • Gender:Female
  • Location:Odense, Denmark
  • Interests:History, languages

Posted Yesterday, 16:24

I have been toying with something similar, although as a statistician I would be talking about statistical modeling rather than AI. But it is similar.

For example, you could look at actions players take that reduce their PAR score for a given board (say passing a partscore when game is on, or failing to find the killing opening lead) and then see how frequent and how costly different kinds of mistakes are at different levels. This wouldn't tell you how good a pair is overall (well, it could, but traditional MP or IMP scoring would probably be more accurate), but it could tell a partnership what they have to work on, or it could tell teachers what to emphasize for a particular type of learner.

Somewhat closer to what you aim at, one could look at how often players make choices that differ from those made by a robot which plays much better than the players themselves. Obviously this would only make sense if the robot had the same partnership understanding and the same understanding of the four players' weaknesses as the players have, but probably one day that will be more or less feasible.

But again, I don't think replacing MP or IMP with such metrics will be popular. Suppose that one day such a deviance-from-robot-choices metric would be better at predicting long-term MPs than the MPs themselves. I can imagine that it would then be used for selection of teammates, but I find it harder to believe that it would be used to award prizes.
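The deviance-from-robot-choices metric could look something like the toy below. Everything here is an invented assumption for illustration (the `Decision` record, having a reference robot's choice and a double-dummy cost available per decision); no existing BBO or GIB interface provides this.

```python
# Toy deviance-from-robot metric: how often, and at what double-dummy
# cost, a player's choices differ from a stronger reference robot's.
from dataclasses import dataclass

@dataclass
class Decision:
    player_choice: str   # the card played or call made
    robot_choice: str    # what the (hypothetical) reference robot chose
    dd_cost: int         # double-dummy tricks lost by the deviation

def deviance_score(decisions):
    """Return (deviation rate, average tricks lost per decision)."""
    if not decisions:
        return 0.0, 0.0
    deviations = [d for d in decisions if d.player_choice != d.robot_choice]
    rate = len(deviations) / len(decisions)
    avg_cost = sum(d.dd_cost for d in deviations) / len(decisions)
    return rate, avg_cost

sample = [
    Decision("H4", "H4", 0),     # agreed with the robot
    Decision("SK", "S2", 1),     # deviation costing one trick
    Decision("1NT", "1NT", 0),
    Decision("pass", "3NT", 2),  # passed out a making game
]
print(deviance_score(sample))    # (0.5, 0.75)
```

As helene_t notes, the hard part is not this arithmetic but giving the robot the same partnership understandings as the humans, so that a "deviation" is actually an error rather than a different (legitimate) agreement.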
The world would be such a happy place, if only everyone played Acol :) --- TramTicket
0

#9 User is offline   akwoo 

  • PipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 1,656
  • Joined: 2010-November-21

Posted Yesterday, 17:25

Similar to, but not too different from, what helene is thinking: one could imagine building a statistical model that would infer how you play as a bridge player. It would take, say, 100 hands played by your partnership, compare them against a database of hundreds of thousands of hands played, and get a good sense of how you play. Then it could run a virtual tournament of a billion hands and simulate how you would do in that tournament based on how you actually played the 100 boards.

This would probably be a much better long-term predictor of your MP scores than your scores from those 100 boards, but would you really want that score to determine prizes? For one thing, a slightly inferior pair would never win...
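The simulate-from-a-small-sample idea can be sketched crudely. Real inference would condition on hand types against a large played-hands database; this toy just fits a Bernoulli "blunder rate" to 100 observed boards and Monte-Carlos a long virtual tournament from it. The 55%/30% board outcomes are invented numbers, purely for illustration.

```python
# Toy model: estimate a pair's per-board blunder rate from a small
# sample, then simulate a long virtual tournament from that estimate.
import random

def estimate_blunder_rate(observed_boards):
    """observed_boards: list of booleans, True = serious error made."""
    return sum(observed_boards) / len(observed_boards)

def simulate_tournament(blunder_rate, n_boards, rng):
    """Average matchpoint % under a toy model: a clean board scores
    ~55%, a blundered board ~30% (both numbers are invented)."""
    total = 0.0
    for _ in range(n_boards):
        total += 30.0 if rng.random() < blunder_rate else 55.0
    return total / n_boards

sample = [False] * 90 + [True] * 10      # 10 blunders in 100 boards
rate = estimate_blunder_rate(sample)     # 0.1
result = simulate_tournament(rate, 100_000, random.Random(42))
print(round(result, 1))                  # near 55 - 25*0.1 = 52.5
```

The simulated average converges to the model's expectation, which is exactly akwoo's point: the noise of any particular 100 boards is averaged away, so the slightly weaker pair essentially never gets the lucky session that lets them win.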
0
