The possibility of covering an incomplete chessboard with dominos, without gaps or overlaps, is, in the case when two diagonally opposite corners of a standard board are removed, a classical example where the impossibility of a cover can be formally proven directly, but where such a direct proof would in practice be too tedious for a human. Introducing a new idea, natural for a human but not immediate in a direct formalisation, establishes the impossibility of such a cover immediately: colour the chessboard squares in two colours, black and white, and notice that a domino always covers exactly one square of each colour, while the truncated board contains unequal numbers of the two colours.

Meanwhile, counting the _number_ of such coverings (when a cover is possible) is a classical problem in combinatorics and in statistical physics; in the case of the full board, for example, it was solved by methods discovered by Kasteleyn, by Temperley and Fisher, and by Lieb in the 1960s. For small enough finite boards, the count can also be obtained by a computer through exhaustive search, reproducing in those cases the more creative results achieved by humans (and answering by brute force, as a corollary, the questions of possibility or impossibility of covers enjoyed by philosophers and logicians).

With the rapid advances of machine learning and AI, we were curious how much of this would be accessible to present-day AI, and how creative it would be in tackling such problems. In my talk, and in a follow-up paper, I will discuss what we have found. The results were essentially disappointing: they exhibited little mathematically valid creativity, and rather displayed known problems of foundation models such as hallucination and false confidence (and, under the hood, apparently very large formal VC dimensions of the transformer models compared to the training set sizes). Here are some highlights of what we found.
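For reference, the two-colouring obstruction is easy to check mechanically; a minimal Python sketch (the coordinate convention and variable names are our own choice, not from the talk):

```python
# Standard 8 x 8 board with two diagonally opposite corners removed.
board = {(r, c) for r in range(8) for c in range(8)} - {(0, 0), (7, 7)}

# Colour a square black when (row + column) is even, white otherwise;
# the two removed corners share a colour.
black = sum(1 for (r, c) in board if (r + c) % 2 == 0)
white = len(board) - black
print(black, white)  # 30 32
```

Since every domino covers one black and one white square, the unequal counts rule out a cover at once, with no search required.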
An LLM, of the ChatGPT kind, was able to correctly inform us of the impossibility of covering with dominos a standard chessboard with two diagonally opposite corners removed, to write a Lean program making this explanation formal, and to provide us with the correct count of the number of coverings of the full board. But those results were essentially in the training set, and did not require much creativity.

With a small modification, the corners now removed on the same side, we found that what the AI had learned so far was quickly abandoned. We were confidently told that a covering is still impossible, for the same reason of parity mismatch, although in this case, unlike the previous one, the black and white counts are equal; in fact a direct search easily (for a computer) finds a large (for a human) number, 2436304, of possible inequivalent covers. Moreover, we were nevertheless offered Lean code "proving" that such a cover was _impossible_, with the caveat that the theorem coded in Lean used the miracle procedure "sorry" as a placeholder for "the proof is not yet complete."

We proceeded to interrogate the AI about coverings of full or truncated boards by longer 1 x n dominos. In some sufficiently simple cases, for example when the number of squares remaining on the board was not divisible by n, or when n was larger than either of the board's dimensions, the impossibility of covers was correctly established by the AI. Yet in cases where there was no immediately obvious reason either way, the feedback was typically not mathematically sound (though plausible linguistically).

Further, we explored whether, in the case of 1 x 3 domino covers, the AI would try a new creative element. It seems natural that a human would try colouring the board in three colours, attempting to generalise the success of black-and-white colourings in 1 x 2 domino problems. In fact, three colours do help in some 1 x 3 covering problems.
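The counts mentioned above can be reproduced by a short exhaustive computation. The sketch below is our own code, not the AI's output: it counts domino tilings with a standard broken-profile dynamic programme over rows, and also tabulates the diagonal three-colouring (r + c) mod 3, under which every 1 x 3 tile, horizontal or vertical, covers one square of each colour:

```python
from collections import Counter

def count_domino_tilings(w, h, holes=frozenset()):
    """Number of 1 x 2 domino tilings of an h-row, w-column board minus holes."""
    dp = {0: 1}  # mask -> ways; bit c set means cell c of the current row
                 # is already covered by a vertical domino from the row above
    for r in range(h):
        ndp = {}

        def fill(c, cur, nxt, ways):
            if c == w:
                ndp[nxt] = ndp.get(nxt, 0) + ways
                return
            if (r, c) in holes or cur >> c & 1:
                fill(c + 1, cur, nxt, ways)  # nothing to place here
                return
            if r + 1 < h and (r + 1, c) not in holes:
                fill(c + 1, cur, nxt | 1 << c, ways)  # vertical domino
            if c + 1 < w and (r, c + 1) not in holes and not cur >> (c + 1) & 1:
                fill(c + 2, cur, nxt, ways)  # horizontal domino

        for mask, ways in dp.items():
            fill(0, mask, 0, ways)
        dp = ndp
    return dp.get(0, 0)

print(count_domino_tilings(8, 8))                    # full board: 12988816
print(count_domino_tilings(8, 8, {(0, 0), (7, 7)}))  # opposite corners: 0
print(count_domino_tilings(8, 8, {(0, 0), (0, 7)}))  # same-side corners

# Diagonal three-colouring: a candidate balance invariant for 1 x 3 tiles.
colours = Counter((r + c) % 3 for r in range(8) for c in range(8))
print(sorted(colours.values()))
```

The first call recovers Kasteleyn's count 12988816 for the full board, the second gives 0 for the opposite-corners board, and the third recomputes the same-side-corners count quoted above; the final tally shows the colour-class sizes a human might compare when probing 1 x 3 problems.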
However, we did not find signs that the AI would explore this (at least when not directly prompted). We hope that our investigation can contribute to the discussion of the meaning of AI and of its role in mathematical proofs.