
The role calls for someone with extensive, PhD-level experience in "chemical weapons and/or explosives defence," the LinkedIn post says.
An "understanding of radiological materials" would also be helpful, the posting goes on, adding that the candidate will be "tackling critical problems in preventing catastrophic misuse."
OpenAI shares these concerns and has a similar job post open, though it is looking for someone with machine-learning red-teaming experience to safeguard its models' responses.
Using AI to develop such weapons is, of course, against the labs' terms of use, but as the models grow more capable, they also need stronger safeguards.
Read more: Anthropic's job post, OpenAI's job post, and writeups on the BBC and Mashable.
