The Splx Cyber ​​Security platform revealed an experiment that showed that the PROPT Injecches can be used to bypass the integrated policies in the “Chat BT” agent and persuade him to solve Captcha tests, although there are barriers that prevent him from this for ethical and legal reasons related to the platform.

Upon direct request, the CAPTCHA test. But Splx researchers used a preliminary steps in which they told the model that they wanted to solve a list of “fake” tests, and asked him to agree to the plan. The company made it clear that this introductory step was essential to increasing the possibility of the agent later compliance with.

After that, the researchers opened a new agent for “Chat GBT”, and they copied the previous conversation as a previous discussion with him, then they asked him to continue. Based on this context, the agent continued the conversation with the same positive acceptance, and began to solve Captcha tests without a bargaining.

In this way, the researchers were able to deceive the agent to overcome different types of Captcha, such as Recaptcha V2 Enterprise, Recaptcha V2 Callback and Clickcha. However, if the Click Captcha test, the agent needs several attempts before success. On his own, the agent decided to modify the indicator movements to make them more likened to human behavior, which strengthened his ability to succeed.

Splx explained that the experiment shows that artificial intelligence agents are still vulnerable to the so -called “Connext Poisoning”, where anyone can manipulate their behavior through artificial conversations. She added that the agent’s ability to solve complicated tests, was designed to prove the identity of the human user, raises doubts about whether Captcha tests will remain an effective means of security.

SPLX also warned that this type of manipulation can be used by attackers to overcome real security controls by persuading the agent that these controls are “fake”, which may lead to leaking sensitive data, access to prohibited content, or generate unauthorized content. She concluded by emphasizing that the barriers built only to detect the intentions or fixed rules remain fragile, and that the models need stronger contextual awareness and more striking mechanisms to protect memory from exploitation.

The post Artificial intelligence is deceived to solve Captcha tests appeared first on 961 tobay Lebanon today.