Taylor Swift tried to sue Microsoft over a chatbot that posted racist messages on Twitter, the president of the tech company has revealed.
Taylor's lawyers threatened legal action against Microsoft in 2016, according to a new biography by its boss Brad Smith.
She was unhappy with the name of its chatbot Tay, designed to interact with 18 to 24-year-olds online, because it was similar to her own.
If you don't remember TayTweets, it's the Twitter chatbot that turned racist.
What was TayTweets?
TayTweets was controlled by artificial intelligence and was designed to learn from conversations held on social media.
But shortly after Tay was launched, it tweeted that it supported genocide and didn't believe the Holocaust happened - among other things.
Microsoft issued an apology and took Tay offline after less than 18 hours of offensive conversations on Twitter.
Taylor Swift's legal action wasn't about what the chatbot had said online, but instead about the similarity to her own name.
"I was on vacation when I made the mistake of looking at my phone during dinner," Brad Smith writes in his new book, Tools and Weapons, reports the Guardian.