Sikuli automates anything you see on the screen. It uses image recognition to identify and control GUI components, which is useful when there is no easy access to a GUI's internals or source code. In human-to-human communication, asking for information about tangible objects can be accomplished naturally by making direct visual references to them. For example, to ask a tour guide to explain more about a painting, we would say "tell me more about this" while pointing to the picture. Giving verbal commands involving tangible objects can also be accomplished naturally by making similar visual references.
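Under the hood, Sikuli locates a target image on the screen with template matching (via OpenCV). As a rough illustration of the idea, here is a minimal sketch in plain Python with NumPy; the function name and the toy arrays are ours, not part of Sikuli's API, and real template matching adds scaling, similarity thresholds, and much faster search.

```python
import numpy as np

def match_template(screen, patt):
    """Slide patt over screen; return the (row, col) of the best
    sum-of-squared-differences match. A toy stand-in for the OpenCV
    template matching that Sikuli uses to find a screenshot on screen."""
    H, W = screen.shape
    h, w = patt.shape
    best, best_pos = float("inf"), (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = np.sum((screen[r:r+h, c:c+w] - patt) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Toy grayscale "screenshot" with a bright 2x2 "button" at row 3, col 5.
screen = np.zeros((8, 10))
screen[3:5, 5:7] = 1.0
patt = np.ones((2, 2))
print(match_template(screen, patt))  # (3, 5)
```

Once the best match is found, Sikuli translates the match region into screen coordinates and drives the mouse and keyboard there.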
Sikuli Search: Our screenshot search system, Sikuli Search, consists of three components: a screenshot search engine, a user interface for querying the search engine, and a user interface for adding screenshots with custom annotations to the index.
On Linux/Unix systems, you need valid installations of OpenCV 2.2+ and Tesseract 3 before you can run setup or use Sikuli.
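A quick way to confirm both prerequisites before running setup might look like the following (a sketch; the exact pkg-config module name for OpenCV varies by distribution, so treat `opencv` here as an assumption):

```shell
# Check that the Tesseract binary is on PATH.
if command -v tesseract >/dev/null 2>&1; then
  echo "tesseract: found"
else
  echo "tesseract: MISSING"
fi

# OpenCV is a library, not a binary; ask pkg-config for its version.
pkg-config --modversion opencv 2>/dev/null \
  || echo "opencv: MISSING (or not registered with pkg-config)"
```

If either line reports MISSING, install the package through your distribution's package manager before continuing.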
On Windows: Sikuli detects the Java version at runtime and switches between the 32-bit and 64-bit native libraries on the fly.
Python scripting is well supported by the Sikuli IDE (JRuby support is available as of version 1.1.0, with more scripting languages to come).
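A typical Sikuli Python script reads like the following sketch. It runs inside the Sikuli IDE (on Jython), not as a standalone Python program, and the `.png` filenames are hypothetical screenshots you would capture in the IDE; `wait()`, `click()`, `type()`, and `exists()` are part of Sikuli's global scripting API.

```python
# Runs inside the Sikuli IDE; the images are screenshots captured there.
wait("start_button.png", 10)   # wait up to 10 s for the button to appear
click("start_button.png")      # click wherever the image is found on screen
type("hello from Sikuli\n")    # send keystrokes to the focused window
if exists("error_dialog.png"): # react only if the dialog actually popped up
    click("ok_button.png")
```

Because every action is anchored to an image rather than a widget handle, the same script works against any application that draws those elements on screen.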