Aberg variation of Capablanca's Chess. Different setup and castling rules. (10x8, Cells: 80)
Reinhard Scharnagl wrote on Sun, Apr 20, 2008 07:57 AM UTC:
To Derek: We know how much work you have invested in your piece value theory, so I understand that you are somewhat angered by H.G.M.'s interpretations of his experiments. But everyone I know who investigates this matter is a strong-minded person. So please do not misinterpret their persistence in their viewpoints as pure animosity.

To all: I understand that the clarity and consistency of a value-defining model is not enough to convince doubters of the 'truth' of such a model. Values generated that way have to be verified in practice. The easy part is to compare such figures with those established for 8x8 chess over the centuries. The harder part of verification is to apply the claimed value scales, e.g. on 10x8 boards, and to check whether they work well there.
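
Just to illustrate that 'easy part' (the model figures below are hypothetical placeholders, not anybody's published values), such a comparison amounts to nothing more than lining a model's 8x8 output up against the classical textbook scale:

    # Illustrative sketch only: compare a hypothetical model's 8x8 piece values
    # against the classical textbook scale (P=1, N=3, B=3, R=5, Q=9).
    classical = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0}
    model = {"P": 1.0, "N": 3.05, "B": 3.15, "R": 4.90, "Q": 9.40}  # placeholder numbers

    for piece in classical:
        diff = model[piece] - classical[piece]
        print(f"{piece}: model={model[piece]:.2f}  classical={classical[piece]:.2f}  diff={diff:+.2f}")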

But it is insufficient to simply optimize a set of values within a given variant, because that does not establish a neutral theory that could be applied to other scenarios and be falsified or verified there. A valid theory's conclusions have to reach far beyond its input.

Looking at the results of H.G.M.'s very interesting 'Battle of the Goths' experiments, what do they imply for our value theory discussion? In my opinion, hardly anything can be derived from them concerning this question. Of course, some games should be reviewed closely to see whether there were structural imbalances. But to me it seems impossible to separate those engines' positional abilities from their tactical power, which obviously depends heavily on the maturity of their implementations.

My program SMIRF is - as repeatedly stated - my first self-written chess engine. It has often been repaired and modified, but it is still caught in its initially naive design, with many known basic weaknesses. Its detailed evaluation, for example, is incredibly slow. In mating phases, competing and incompatible evaluations arise in SMIRF, so some games are lost even with a clear mating line in view. SMIRF has been programmed without using foreign sources. For all these reasons it is not a mature engine, and so I plan to put my experience into a follow-up engine, Octopus, which will nevertheless need a lot of time.
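
As an aside, a common way to keep mating-phase scores consistent (this is only a generic sketch, not SMIRF's actual code) is to score a forced mate as a fixed mate constant minus the ply distance, so that any mate outranks any material evaluation and shorter mates outrank longer ones:

    # Generic mate-score convention (a sketch, not SMIRF's implementation):
    # forced mates are scored far above any material evaluation, and the ply
    # distance is subtracted so that shorter mates compare as better.
    MATE = 100000  # larger than any heuristic/material score

    def mate_score(plies_to_mate: int) -> int:
        """Score for delivering checkmate in the given number of plies."""
        return MATE - plies_to_mate

    def is_mate_score(score: int) -> bool:
        return abs(score) > MATE - 1000  # reserve a band near MATE for mate scores

    # A mate in 3 plies outranks a mate in 7, and both outrank a queen (~900):
    assert mate_score(3) > mate_score(7) > 900
    print(mate_score(3), mate_score(7), is_mate_score(mate_score(7)), is_mate_score(350))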

Derek and I have experimented with applying different value models to otherwise identical engines, identical apart from those different value approaches. Though this seems to be the more reliable approach to verifying value models, it nevertheless has structural weaknesses too: in the realization of such a program there are many parts reflecting the ideas of its creator, so it is never completely independent of that programmer's thinking.
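
For illustration only (the play_game stub below is hypothetical and stands in for whatever engine one actually runs; it is not Derek's or my real setup), the core of such a test is a paired match: the same engine gets two different value tables, and every test position is played twice with the tables swapped between the colours, so that colour and opening bias cancel and only the value models differ:

    import random

    # Hypothetical stub: a real test would launch one and the same engine,
    # configured with the given piece-value table for each side.
    def play_game(white_values, black_values, position, seed):
        rng = random.Random(seed)  # placeholder for an actual game
        return rng.choice(["1-0", "0-1", "1/2-1/2"])

    def paired_match(values_a, values_b, positions):
        """Play every position twice, swapping the value tables between colours."""
        score_a = 0.0
        for i, pos in enumerate(positions):
            for a_is_white in (True, False):
                white, black = (values_a, values_b) if a_is_white else (values_b, values_a)
                result = play_game(white, black, pos, seed=2 * i + a_is_white)
                if result == "1/2-1/2":
                    score_a += 0.5
                elif (result == "1-0") == a_is_white:
                    score_a += 1.0
        return score_a, 2 * len(positions)

    values_a = {"P": 100, "N": 300, "B": 300, "R": 500, "Q": 900}   # placeholder tables
    values_b = {"P": 100, "N": 325, "B": 325, "R": 475, "Q": 950}
    print(paired_match(values_a, values_b, positions=["start"] * 10))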

So what conclusion arises from H.G.M.'s event? SMIRF has to be rewritten as Octopus to become more mature. And perhaps H.G.M. might try to embed his value model in a verifiable abstract theory, if he would like to widen its acceptance.