AI safety tests found to rely on 'obvious' trigger words; after simple rephrasing, models labeled 'reasonably safe' suddenly fail, with attacks succeeding up to 98% of the time. New corporate research ...
Use the vitals package with ellmer to evaluate and compare the accuracy of LLMs, including writing evals to test local models ...
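A minimal sketch of that workflow, following the Task/solver/scorer pattern from the vitals README; the toy dataset and the model names here are illustrative assumptions, not details from the result above. An ellmer chat object generates the answers, and a model-graded scorer checks each one against its target; swapping in ellmer's chat_ollama() points the same eval at a locally served model.

```r
# Minimal sketch, assuming the Task/solver/scorer pattern from the vitals README.
library(vitals)   # eval framework: Task, generate(), model_graded_qa()
library(ellmer)   # chat interfaces to LLM providers
library(tibble)

# Toy dataset (illustrative): `input` is the prompt, `target` the reference answer.
simple_addition <- tibble(
  input  = c("What's 2+2?", "What's 2+3?"),
  target = c("4", "5")
)

tsk <- Task$new(
  dataset = simple_addition,
  # Solver: an ellmer chat answers each input. chat_ollama() targets a
  # local model; chat_openai(), chat_anthropic(), etc. work the same way.
  solver  = generate(chat_ollama(model = "llama3.2")),
  # Scorer: a model grades each answer against its target.
  scorer  = model_graded_qa()
)

tsk$eval()   # runs solver and scorer, logging per-question results
```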