Could not agree more. Chris Hedges, thank you very, very much for your time, for your insights, and I know that my audience appreciates it as much as I do. And to all of you, I look forward to speaking with you again next week. This is WBAI New York 99.5 FM and WBAI.org online. The time now is 7 p.m. Stay tuned for Off The Hook coming up. The toll-free number you have dialed has been disconnected. No further information is available about this number. 074-T. We're sorry. The number you have reached, 99.5 WBAI, is now Off The Hook. And a very good evening to everybody. The program is Off The Hook. Emmanuel Goldstein here with you on this Wednesday evening, joined tonight by Kyle. Right here. There you are. Okay. And out there in Skype land, I believe I see Rob T. Firefly. Good evening. And over there is Gila. Good evening. And I believe we also have Alex. Good evening. Okay. Everyone is with us. We're going to have a special guest join us in a moment. But first, I just wanted to check in with everybody and see how their week has gone. Programming reminder: we will not be on next week, so this is the last show of February. The next show will be in March. We are on at 8 o'clock tonight on YouTube for Overtime, so you can join us there. You can call us and participate in the conversation as well. Any updates from people on things we've talked about over the past couple of weeks? I have a couple of things. Our intelligence is still not artificial. We're using the natural kind. Okay. Apart from that quip, does anybody have any other stories? Okay. Well, I have something interesting here. We reported on the Bing controversy last week, Bing getting all bent out of shape and getting accusatory, hostile. Well, in response to that, Microsoft is now limiting conversations with its new chatbot in the Bing search engine to only five questions per session and 50 questions per day. Yeah. They did that in short order. Basically, they expected their chatbot to sometimes respond inaccurately, and they built in measures to protect against people who try to make the chatbot behave strangely or say harmful things. Still, early users who had open-ended personal conversations with the chatbot found its responses unusual, sometimes creepy. Now, people will be prompted to begin a new session after they ask five questions and the chatbot answers five times. Very, very long chat sessions can confuse the underlying chat model, Microsoft said on Friday. Last Wednesday, the company wrote in a blog post that it didn't fully envision people using the chatbot for more general discovery of the world and for social entertainment. Really? You didn't envision that, huh? The chatbot became repetitive, sometimes testy, in long conversations. Microsoft said its data showed that about 1% of conversations with the chatbot had more than 50 messages. It said it would consider increasing the limits on questions in the future. The company is also looking at adding tools to give users more control over the tone of the chatbot. I'm very disappointed because I was looking forward to having an extended conversation with the Bing chatbot over the weekend and was going to report back for tonight's show. But yeah, I guess this is why we can't have good things. I think it's almost a challenge, honestly, to figure out what to say in five questions to really get it to say or do something ridiculous.
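[An aside for the technically curious: enforcing a cap like the one described is trivial compared to fixing the underlying model. Below is a minimal sketch in Python of the kind of per-session and per-day limiter Microsoft announced. The five-and-fifty numbers come from the coverage above; the class and method names are purely hypothetical, not anything from Microsoft's actual code.]

    import time
    from dataclasses import dataclass, field

    @dataclass
    class ChatLimiter:
        # Limits as reported for the Bing chatbot: 5 turns per session, 50 per day.
        per_session: int = 5
        per_day: int = 50
        session_count: int = 0
        day_count: int = 0
        day_start: float = field(default_factory=time.time)

        def allow_question(self) -> bool:
            # Reset the daily counter every 24 hours.
            if time.time() - self.day_start >= 86400:
                self.day_count = 0
                self.day_start = time.time()
            # Refuse once either the session or daily budget is spent.
            if self.session_count >= self.per_session or self.day_count >= self.per_day:
                return False
            self.session_count += 1
            self.day_count += 1
            return True

        def new_session(self) -> None:
            # The user is prompted to start a fresh session after five answers;
            # only the session counter resets, not the daily one.
            self.session_count = 0

    limiter = ChatLimiter()
    for turn in range(6):
        print(turn + 1, limiter.allow_question())  # the sixth call prints False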
Where's the turning point? I don't know of another word for "zetz," but where can you really zetz Sydney to a point where Sydney will react oddly? Honestly, it's a challenge that I think you could really rise to. Yeah, and Sydney, of course, is the actual name of the Bing chatbot, which apparently was one of the things that triggered it to get testy: if you referred to the chatbot as Sydney, it didn't like that at all. Alex? Gila brings up an interesting point. I think we touched on this very briefly last week, but there is this entirely new generation of occupations arising around the use of artificial intelligence. One of those is the AI prompter, somebody who asks the right questions of an artificial intelligence in order to generate the right type of response in the right fashion and the right style, and to do it iteratively. In a weird sense, it reminds me a bit of the Socratic method in law school, where every question is going to come back with an answer and then another question and another question. It's just bizarre to me that there's an entirely new occupation just for prompting, just for questioning of these types of artificial intelligence. Absolutely. Go ahead, Kyle. This story strikes me as benefiting solely the way that the coverage of this technology is framed. It benefits their PR motivations, I think. I don't know who this helps. People are asking it what they're asking it, and yes, they're alarmed, but are they protecting us, or are they protecting their technology from being overwhelmed by us? It probably doesn't care about us. They're afraid of being made to look like fools. Yes. It seems like they don't want weird stuff being talked about as much. It seems that that is more the impact of these kinds of measures you're describing. Well, we did spend about 25 minutes talking about it last week, along with everyone else, but there was a lot of marveling at it. There's a bit of fear as well. Yes, go ahead, Rob. Yeah, that's much what I was thinking. It seems like a distinctly Microsoft response to the fact that people are engaging this thing in these long conversations and getting all this crazy stuff out of it, and Microsoft's response is to lasso the thing back and limit the number of questions you're allowed, rather than taking a deeper look at why it's giving these responses, what algorithms are flipping around like this, and why people are asking it the things they're asking. Yeah, and it's not just the chatbots. It's also the material that AI is producing, and this has caused a bit of a problem in the publishing world. In fact, one of our fellow publishers is a science fiction magazine called Clarkesworld. You might have heard of it. In fact, we've been following part of their saga because of the latest Amazon crisis, the Kindle crisis that we've made reference to in the most recent issue of 2600. Small publishers everywhere are being affected by the fact that Amazon has decided to stop supporting independent magazines through the Kindle, at least in the way that they have been doing, and it's likely to threaten the existence of many publishers. That's a topic for another show; we simply don't have time to get into it right now. But Clarkesworld is in the news for a different reason: they have closed their submissions. In fact, if you look at Neil Clarke's Twitter thread, you'll see that he has said submissions are currently closed.
It shouldn't be hard to guess why. Clarkesworld is considered one of the top sci-fi and fantasy literary publications. They've won several Hugo Awards. They regularly ban a small number of people from submitting works each month, mostly for alleged plagiarism. As of Monday, they had banned more than 500 accounts this month. The magazine explicitly prohibits stories written, co-written, or assisted by AI. They've been on top of this for a while, I guess. The latest deluge of machine-written submissions appeared to come from individuals outside the sci-fi and fantasy community. He blamed the flood on people trying to make money from a side hustle of selling AI-generated content. That's something that I think every publisher is going to have to worry about. At 2600, we don't pay the writers; we give them merchandise in exchange. I think in our case, it won't be people trying to make a buck out of using AI, but it will be people who, I don't know... I like to think that the shame of having your name attached to something that's not real would be enough to discourage people from doing that. Also, it's not that difficult to tell when something is written by AI if you read it carefully. That's what I hope we do. It's going to be interesting, though, not just for small publishers but for big publishers as well, for book publishers. As artificial intelligence gets more and more sophisticated, it's going to be really, really hard to tell the difference. It really is. Yeah, we're at an interesting point right now where the AI-generated text is crunching everything out there and spitting out a result. You can kind of tell when you're reading it. There's a distinct lack of human voice, of human spirit, behind what you're reading, especially in things like a sci-fi story or even the sorts of things that 2600 publishes. I think that depends a great deal on there being some actual points being communicated by a person, some actual measure of human creativity, human heart, for lack of a better term. That's not easy for the AI to fake at this point. Will it remain that way? Probably not. We're starting to open our eyes to the possibilities. I think the submission process will transform in light of this. Maybe examples like that are an indicator, basically, of taking a pause where you ordinarily might accept things en masse in automated, less supervised ways, but people are taking a second look. As you read, many already have policies that are based more on an honor-and-trust transaction, a sort of pride thing. There's a lot that writers and publishers, I think, are going to be navigating together. It's going to be harder to trust people that you don't already know, that aren't known entities. If somebody is a writer that has written things before, you tend to believe that they're going to continue writing on their own without the help of artificial intelligence. But for a new writer, it's going to be particularly hard. They're going to have to prove that they're not a computer. It's going to be more difficult than finding the traffic lights. But get this. I just saw this in a particular story about this. Did you know there are more than 200 books on Amazon right now that attribute authorship to ChatGPT? Yes, they're proud of it. Since Amazon pretty much lets anyone do whatever they want, they're selling these books. Some have even started coaching aspiring authors on how to use ChatGPT as a creative writing partner.
But this isn't just affecting magazines like Clarkesworld. Several academic journals, including Science and Nature, have instituted policies restricting the use of ChatGPT after the technology was listed as an author on papers. Any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility, Nature's editors wrote in a post outlining their policy. Yeah, and those policies will probably become more common because more avenues to generate text via AI are on the way. Users recently started getting access to Google's Bard. Has anyone played with Bard yet? No? There's Microsoft's Bing chatbot, as we've mentioned, and Chinese tech giant Baidu is expected to release another bot called Ernie soon. Ernie, I like that one. Yeah, so the world is changing. The world is really changing fast, and kind of in a scary way. Yes, Alex? You know, as we've talked about a bit on the show, I started teaching recently over at King's College in London. I've been going through dissertation outlines from the students over the last couple of weeks, and I have to tell you, I haven't seen anything that, to me, looks like it was generated by some kind of artificial intelligence. I really don't. I feel like at least our students are scared enough of using that kind of thing and being detected that they haven't done it yet. But then again, I'm only reading it. Alex, you poor fool. You don't know, do you? They're so much more sophisticated than you give them credit for. You're probably right. You're probably right. Over there in England right now, they are rolling on the floor. Every professor must be living in complete fear of this. There are a lot of guides out there that will show you how you can detect the use of ChatGPT. It very often attributes quotes wrongly or gets certain positions wrong when attributing quotes to those sorts of secondary sources. Yeah, but you know what, Alex? I used one of those tools a couple of weeks ago that purported to tell you if something was created by artificial intelligence, and I fed our latest editorial into it. It said it was written by a robot. It said it was artificial intelligence. I wrote that myself. Am I artificial? Is this Westworld? What's going on here? I can't really trust those programs, those solutions yet. Well, we've always said your writing is very formulaic. You need to vary it up a little bit. Hey, who's we? Yeah, the Borg entity, I guess. Hey, so we have some activity in the Supreme Court this week concerning something we've talked about in the past known as Section 230, a 1996 law that promotes free speech online. According to the Electronic Frontier Foundation, because users rely on online intermediaries as vehicles for their speech, they can communicate to large audiences without needing financial resources or technical know-how to distribute their own speech. Section 230 plays a critical role in enabling online speech by generally ensuring that those intermediaries are not legally responsible for what is said by others. Section 230's reach is broad. It protects users as well as small blogs and websites, in addition to giants like Twitter and Google and any other service that provides a forum for others to express themselves online. Courts have repeatedly ruled that Section 230 bars lawsuits against users and services for sharing or hosting content created by others, whether by forwarding email, hosting online reviews, or reposting photos or videos that others find objectionable.
Section 230 also protects the curation of online speech, giving intermediaries the legal breathing room to decide what type of user expression they will host and to take steps to moderate content as they see fit. But if the plaintiffs in the two cases being heard this week, Gonzalez v. Google yesterday and Twitter v. Taamneh today, convince the court to narrow the legal interpretation of Section 230 and increase platforms' legal exposure for generally knowing harmful material is present on their services, the significant protections that Congress envisioned in enacting this law would be drastically eroded. Many online intermediaries would intensively filter and censor user speech. Others may simply not host user content at all, and new online forums may not even get off the ground. David Greene is a senior staff attorney and civil liberties director over at the Electronic Frontier Foundation. He joins us tonight. David, welcome. Hi, thanks for having me. Can you give us an update as to what happened this week in the Supreme Court? Yeah, well, the cases were argued yesterday and this morning, and I'm not generally one to make predictions about cases based just on the arguments themselves. It can be very difficult to figure out where the court is going to go, and I think that may be especially true with these cases. I think those of us who recognize Section 230 as being a really vital part of the architecture of the modern internet were generally pleased with the way yesterday's argument went: none of the justices indicated by their questions that they were inclined to throw the whole thing out. At the same time, they had some very good questions about whether it has been properly interpreted. So we don't know what's going to happen. And I think even today's argument, which wasn't really about Section 230 directly, but about when websites can be liable for what users do on their sites, was, again, a little bit less revealing in terms of trying to figure out which way the justices are going to go. I saw a quote today from Justice Elena Kagan. This is pretty incredible. She said, we're a court that really doesn't know about these things. These are not, like, the nine greatest experts on the internet, referring to the Supreme Court. I thought that was incredibly honest. But also, does that cause you concern or relief? No, no. Well, it got a big laugh during the hearing yesterday, which always sort of breaks the tension a bit at Supreme Court arguments. I think it's absolutely correct, obviously. I mean, we don't put tech experts on the court. We also don't really expect the justices to be experts in lots of things. But we expect them to be people who can consider information provided by experts, sort through it, and make good decisions. So it was a nice recognition of the fact that they're not going to understand the technology to the extent that technologists understand it, and it really invited the lawyers: explain this to us in a way that we can understand, and make sure we make the right decision. I was wondering, when I heard that, how long she'd been saving that line. I mean, they've considered lots of technology cases over the years, and she'd saved it for yesterday's hearings. Wow. Yes, go ahead, Alex. Yeah, David, and welcome to the show. I think this is your first time on WBAI, right? We've had many of your colleagues on from the EFF, but welcome.
And I believe that you're a professor as well, right? You teach First Amendment law, is that right? At the University of San Francisco? Yes, I do. I teach a First Amendment class at the University of San Francisco School of Law. Fantastic. Well, welcome to the home of FCC v. Pacifica. And we're glad to have you here. What I wanted to get into with you here is the facts of these cases, and Gonzalez v. Google in particular. Do you want to let our listeners know in what context this case arose? Because to me, I think it's really fascinating. The Section 230 issues are obviously really heavy. They're very important to our listener base. They're very important to the internet and how it evolves. But this Gonzalez v. Google case arose in the context of terrorism. So could you tell us about that? Yeah, so both of the cases, Gonzalez v. Google, which was heard on Tuesday, and Twitter v. Taamneh, which was heard this morning, both examine the question, ultimately, of to what extent can online services be liable for terrorist attacks? And the allegations are that the terrorists used the online services for planning, or to meet each other, or to recruit cohorts, or things like that. These things are very tragic. The incidents that happened are tragic. People were murdered. So as much light as we make about some of the issues involved, these are awful tragedies at the heart of these things. Gonzalez was a victim of the Paris nightclub shooting. The allegation in that case is that YouTube assisted in the recruitment of terrorists by promoting videos of the terrorist organization to certain users. In the Twitter v. Taamneh case, which is based on a different nightclub shooting, the allegation was that they used Twitter in order to communicate among each other and plan. So both are basically saying there are certain things that the services do for everybody: they provide ways for everybody to meet and plan, and they provide suggestions, recommendations for everybody, no matter what their likes or dislikes might be. And can they be held liable for terrorist acts because certain users, the allegation goes, used these, and these awful tragedies resulted? What's interesting is this content promotion. Do you know if, at least in Gonzalez, the terrorists ever acknowledged that they watched any of these YouTube videos? I don't think so. I don't even think that's an allegation in the complaint. I think the idea is that the complaint alleged that the general buildup and structure of the entity and those who made the attack were fomented by the suggestion algorithm. I don't know whether they made specific allegations that the actual perpetrators watched the videos. I actually don't know if they would have had to have made that allegation. But it's true: there are the Section 230 issues of whether you should have to defend the lawsuit at all, and then there's the question of, if you're going to defend it, whether the connection between the service provided and the tragic result is too attenuated to hold someone legally liable. I think that's a question both under the statute, JASTA, which is the statute that allows these civil lawsuits, as well as under the First Amendment: to what extent does just being a communications services provider, to what extent does the First Amendment protect your ability to enable other people to communicate?
On that note, the First Amendment is obviously an American thing. How does this work if the company is in a different country? We wouldn't have this kind of lawsuit going on. It would be something else, wouldn't it? Section 230, this idea that for a lot of legal claims, online intermediaries are immune when the liability is founded on the speech of others, is an American concept as well. In other parts of the world, there are different schemes for intermediary liability, and let me just tell you right now that no one is happy with any of them. As much as people complain about Section 230, there's not a model out there in the world that is addressing everybody's concerns. This is a difficult area. The statutes are uniquely American statutes, and you would have a much different result if this were heard in a different country's courts. This really does present several intersections of things that are fairly unique to American law. What was fascinating to me, too, is how this case wound its way up through the appellate process: it started in the Northern District of California, then went to the Ninth Circuit, and now to the Supreme Court. When it hit the Ninth Circuit, the big platforms won, but there was some dissent among the ranks there, and those dissenting opinions were fascinating. I think this is the reason why the case got to the Supreme Court. As much as it's about content and it's about publication, it's about how far Section 230 can go to protect these platforms, and in particular, does it shield them from the consequences of their algorithms? These are content promotion algorithms; people upload tons and tons of video to YouTube every single day. It's something like 500 hours of video going up to YouTube every minute, which is extraordinary. But to recommend those videos to other people, you have to have this content promotion algorithm. Here's where I want to push back for a second, David, and get your views on this, because we have a lot of instances in the past where we can point to specific problems. We can look at places like Myanmar and we can say, goddamn, there were some real big problems. Digital violence spilled over into physical violence in places like Myanmar. We know that this has happened in places like East Africa. We've seen election interference, possibly even resulting in the election of Donald Trump in the United States of America in 2016. We know that these algorithms can be used and abused, and we know that they can have these foreseeable consequences that are absolutely horrible. Let me also add one more premise to this argument before I come to some kind of question. Who created these algorithms? They were created by these technology platforms. We're not talking about some kind of bulletin board system where somebody's just uploading content and others are going and looking at it and curating it. They're pushing it out. They're promoting this content, and the big platforms created these algorithms. So if there are these foreseeable consequences that are harmful not just to one person but, let's say, to entire populations, is it not going too far to have Section 230 provide them this measure of immunity? I think there are two questions in there. One is what should the law be? If Congress is going to amend or rewrite 230, what should it do? Then there's a separate question, which is what the court considered yesterday: what is the current law, and how is it actually interpreted? Those are a bit of separate things.
We can even just talk about this from a policy perspective. What do we want? Where do we want liability to be placed? It's true with any immunity: immunity just means that someone, in some situation where they might otherwise be legally liable, is not going to be liable. That's what immunity means. Section 230 and other immunities apply both to good lawsuits and bad ones. So they protect people from meritless lawsuits, but they also shield them from lawsuits that otherwise might have had merit. So there's always going to be a trade-off. Whenever we have immunity in the law, and we have lots of them in many different forms, we're always making a trade-off. There are some people who aren't going to be protected, who are going to lose legal relief that they otherwise would have been entitled to. So our question is: what are we gaining by having it? The decision that Congress made in 1996 was that the internet, given how interactive it was, how open it was for all users, and how ungated it was, was going to be unworkable under our present system, under which people otherwise are liable if they republish what someone else said. So if you were just speaking with someone and you said, well, Alex told me X, and Alex's statement was harmful, then I, repeating that, would bear legal liability. It was thought that that just won't work on the internet, right? We have all these layers of intermediaries that push people's information along. We have all these people speaking. The volume of this stuff is so great, and so they made the decision to create the immunity. Again, what that means is that there are going to be some harms that don't get addressed. So we do have to look at the alternatives to doing this. How can we still preserve an internet that's fairly accessible to people without a ton of money and without a ton of technological expertise? If we care about user-generated content, what legal protections do we give people who help distribute user-generated content? Because under the law without 230, they had very, very little protection. So if we want to have user-generated content, how do we do that? Those, I think, are important questions. I find that the immunity system is actually a really good one, a really good system. Of all the ones I see around the world, I don't know of one that's working better. There's not one that really decreases the harms as much, and we start to see very drastic restrictions on speech. With that, too, this is a fairly old law as technology goes. At the time Section 230 was passed, we were talking about websites like AOL and interactive computer networks like Prodigy and things like that that needed to be promoted. Really, we're talking about the evolution of the internet without anyone even having been able to conceive of something like a social media platform at that point, and the massive amounts of data that would be passing through these interactive computer networks, these platforms. I think there's a big difference, too, between what we considered to be within the realm of the term "publication" back at the passage of Section 230 and what we consider publication now, because this is protecting platforms, essentially, from being considered publishers. Maybe that's something you could go into a bit, too, David. Why is the New York Times, for instance, treated differently than, let's say, Facebook or Meta or Google?
I think at this point now, when you have these content promotion algorithms, are we not stretching the definition of what we are protecting by virtue of immunity from being considered a publisher? There's a little bit of a circular term of art here, because "publisher" is essentially the term used when you're going to impose liability on somebody. To treat someone as a publisher means to treat them as the original speaker, the creator of the content. In some ways, it's the conclusion. The way this arose in the common law is that you could be considered a publisher even if you weren't the original speaker, because you republished somebody else's statement. You either wrote it down, or you took something you found written and passed it along to somebody. There were all these cases deciding whether a courier was a publisher, because what they did was carry someone's message to another person even though they didn't create it at all; or whether booksellers or newsstands were publishers; or, if you had a newspaper and you allowed letters to the editor, whether you were a publisher of those, even if you didn't write a word of them or you ran advertisements. The idea of saying someone was a publisher meant that they were going to have legal responsibility for somebody else's content. What Section 230 says, that you shall not be treated as the publisher, that you shall not bear liability as a publisher, was meant to counteract that. The New York Times has Section 230 protection for its online publication the way anybody who publishes online does as well. If you look at NewYorkTimes.com, which publishes a lot of content, that gets Section 230 protection, as does WBAI.com. When Airbnb has a print magazine, that doesn't get Section 230 protection. It's not that the protection flows to certain people and not to others; it's not like only tech companies get the protection of Section 230. Anybody who puts information on the internet that they didn't write themselves gets the protection of Section 230. I do think, although social media didn't exist back then, the case for immunity is much more compelling now than it was even back in 1996, just because the volume of decisions that online intermediaries have to make, the volume of user content that flows through them, is so much greater now. If we thought it was unmanageable for Prodigy or CompuServe to have to vet every single post before it showed up on the site, that's even harder to do now. Plus, because we've seen layers of other intermediaries build up, it's actually much easier to create your own service now, whereas back then something like Prodigy or CompuServe still required a fair bit of sophistication. Even to use a bulletin board back then required some bit of technological know-how. Those were the days from which the Communications Decency Act was born. I'm in agreement with you. I want to thank you for doing what you do for the EFF. I pushed back so hard only because I know you can take it. We all agree that Section 230 is a really important piece of legislation that allowed the internet to flourish in many ways. I hope you're right that the court kicks this case down, but it does seem like this is something that may come down with some kind of mandate for Congress to pick up the ball. We might see some massive changes to CDA Section 230 over the next few years. Do you think that's a possibility, David?
There have been a lot of calls to amend or tinker with Section 230, some of which are small fixes, others wholesale redos of the liability scheme. There are already many efforts pending. No matter what the court does, there will be a lot of congressional efforts. I do think you'll see at least several justices basically admonishing Congress that if they don't like the way this plays out, then they should go and try and fix it. It's proved to be a politically difficult thing to do, because it seems like across the political spectrum, people are unhappy with different parts of the law, and there's not really agreement over what to fix. But yes, I think we'll see very, very active congressional attention to intermediary liability schemes. Yeah, and I'm going to call it now. I think that Justice Kavanaugh is going to write the opinion in the Gonzalez case. I might be wrong. I was surprised by some of his intelligent comments, things that didn't involve beer. But once again, joining us tonight was David Greene, who is Senior Staff Attorney and the Civil Liberties Director of the Electronic Frontier Foundation. Aside from working for one of our favorite organizations, David is also part of the steering committee for the Free Expression Network and an adjunct professor at the University of San Francisco School of Law. David, we can't thank you enough for what you do and being part of the EFF and for joining us tonight. Will you stick around for the rest of the show? Sure, I can hang out for a bit. Fantastic. I'll pass it back to you, E. Okay. Well, let me just ask David, are there any links or contact info you'd like to share with our listeners? Yeah, everything EFF does is on our website, EFF.org. And if you're interested in Section 230 particularly, just earlier this week, or maybe late last week, we posted sort of a whole primer explainer on Section 230. We have a ton of resources about intermediary liability, and if you're interested in comparative intermediary liability schemes in terms of other international legal systems, you can find those on our site as well. Awesome. EFF.org, by the way, is the website again. WBAI.org is our website, not WBAI.com. That goes nowhere. I just checked. It doesn't go anywhere. Maybe we should get that one. But please support WBAI as much as you can. Go to give2wbai.org and pledge massive amounts, because that's what we need right now to keep surviving in this crazy world of media and speech and all that. You can also call 212-209-2950 and pledge on that phone line. Please mention Off The Hook when you do, and continue to listen to WBAI 99.5 FM in New York City, somehow broadcasting at full power since 1960 without having a single commercial. Hey, we had some interesting news over the past few days. Actually, it's not a surprise, but some of you might have seen this on your Twitter accounts: a big message saying you must remove text message two-factor authentication. Yes, that thing that everyone has been telling you to do, well, now Twitter is telling you to undo it, because only people who pay for it can use it. Only Twitter Blue subscribers can use the text message two-factor authentication method. According to their site, it'll take just a few minutes to remove it. It's like the exact opposite of what we were told before. Yeah, it's fast and it's easy to get rid of it. You can still use the authentication app and security key methods.
Yes, there are still ways, but this is just kind of bizarre. Especially the very last sentence here: to avoid losing access to Twitter, remove text message two-factor authentication by March 19th, 2023. Yeah, you could find yourself locked out because Elon Musk wants to make some money with Twitter Blue. It is the latest cash-generating idea, or an attempt at one, anyway. Basically, as of March 20th, 2023, only Twitter Blue subscribers will be able to use text messaging as their two-factor authentication method to verify their username and password when they log into a new device. Non-subscribers will still be able to enable two-factor authentication using either an authentication app like Google Authenticator or a physical security key. You're shaking your head in disbelief, Kyle. Do you not approve of this latest move by Elon Musk? I just think it's really silly. Who is going to this level of detail, going through their settings, just to do some busy work for Twitter? The motivation of anyone who would desire this over the free alternatives, I guess, is what perplexes me. Well, I mean, a lot of people were talked into doing it. I'm sure they just have it installed and have forgotten about it, and it's just there. I just worry that a lot of people will find themselves locked out as of March 20th. It's a courtesy before they hijack this and turn it into a money-making feature. This is like them being proactive against people who would be really put off and upset after it's... Okay. I thought they needed them to do some work before they could have this feature. Using SMS to do two-factor authentication, though, is the easiest way to do it, basically because all you need is your phone: you get a message on your phone saying, hey, is this you? You say, yeah, and that's it. All the people who do that now, if they don't remove that or if they don't pay for Twitter Blue, will find themselves locked out when they next try to log in. Okay. So a scenario where you're not using a smartphone, this allows you to bypass that. Well, no. It's going to your smartphone. The SMS text message is going to your smartphone. Well, it may not necessarily be a smartphone, and that could have value. Yeah. I hear the youths are really big into the clamshell form factor. They're coming back. Yes. Rob, go ahead. Yeah. For those of you who were using Twitter around four or five months ago, when it changed hands and ownership and the new owner fired a bunch of people, including a bunch of people who worked on two-factor authentication, the two-factor authentication in Twitter broke, leaving a lot of people locked out of their accounts. And it was only people who were able to get in through a login that they happened to have sitting open on a machine somewhere, and disable two-factor authentication, who could get into Twitter at all at that point. So the new Twitter has already demonstrated that it is unable to keep this functionality going, and now they want to charge money for it, which is hilarious. But it's also like, you would think it would be in Twitter's interest to maintain the security of its users. But now they want you to pay to lock the front door of your house, effectively. And this is entertaining. Well, the house analogy in regard to a Twitter account is amusing, but I see the point. I definitely do. But what's also crazy here is that this is seen as monumentally stupid, but guess who's emulating it?
That's right, the people at Facebook, because now they will allow you to become what they consider verified, legitimate, if you pay them. And this is true on Instagram. This is true on Facebook. And it's basically something that Elon Musk started a few months ago when he introduced Twitter Blue. You can get a little blue checkmark if you pay for it and be indistinguishable from those people who are bona fide celebrities or have been confirmed in other ways. But you can completely lie. In fact, people were lying about being Elon Musk and having a blue checkmark, and it caused no end of heartache. But now similar things are happening at Facebook, if you pay, what is it, $15? Some outrageous amount. Who would pay $15 a month to be on Facebook? I mean, they should be paying us. I'm trying to get off of Facebook. It's really difficult. It's really hard. Gila, go ahead. Well, no, that was what I was going to say: that they are building upon it by doing the same thing, but making you pay more money for it. And I think there are even different price tiers for Facebook versus Instagram, which I found absolutely fascinating. I think Instagram is more expensive, which is fine, because Instagram is more annoying, and that's another barrier to participation. But I'm baffled by the idea of changing these things to paid services, and what that will accomplish, what that will do to the user base. You know, I just want to see my high school friends' kids. That's really all I want. Well, maybe you should call them then, you know. When did we allow these big major companies to control our social lives? It used to be you could use the internet to connect with people on your own terms: email, webpages, even instant messaging. I miss that. Yeah. Remember the days when you could go into IRC and see all your friends? Those were the days, right? You can still do that. You can still go on to IRC. In fact, we have irc.2600.net. You can go there right now and see all kinds of characters. I want to push back on this for one second, though, because I think there may be some rational basis for Twitter's decision here. Those SMS text messages for two-factor authentication can actually get expensive at the kind of massive scale they're operating at. But it raises the question: why not just automatically disable this for all of your accounts? I mean, there are people that are not blue checkmarks that have this enabled. Why not just disable it, instead of trying to block people out of their accounts when their session expires after whatever that arbitrary date is? You know, it is much more secure to use some kind of authenticator app, and I think that's what Kyle was driving at, which is that that's the type of authentication that you can perform on a smartphone. If you tie your two-factor authentication code to something like the Google Authenticator app, that's much more secure, because your two-factor authentication code is not subject to SIM jacking. If somebody jacks your SIM card, meaning they steal your SIM card or have it reissued to them and can then intercept your text messages, they could then get into your Twitter account. You can't do that with the Google Authenticator app or some kind of software-based application. The last word has to go to Gila. We have to head out. All I was going to say is that they're trying to sell a worthless product to people who want to look smarter. They are selling snake oil to make people feel better about themselves, and they're going to make money doing it.
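[A quick technical aside for readers wondering why authenticator apps dodge SIM jacking: the code never travels over the phone network at all. It is computed locally from a shared secret and the current time. Below is a minimal sketch of the TOTP algorithm (RFC 6238, built on RFC 4226's HOTP) that apps like Google Authenticator implement, written in Python using only the standard library. The Base32 secret shown is just a common documentation example, not a real credential.]

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        # Decode the shared secret (handed over as a QR code at enrollment).
        key = base64.b32decode(secret_b32, casefold=True)
        # The moving factor: number of 30-second intervals since the Unix epoch.
        counter = int(time.time()) // interval
        msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        # Dynamic truncation per RFC 4226: pick 4 bytes based on the last nibble.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Example secret from common TOTP documentation, purely illustrative.
    print(totp("JBSWY3DPEHPK3PXP"))

[Because the server and the app both hold the secret and both know the time, nothing ever needs to be sent to the phone, which is exactly what takes SIM jacking out of the picture.]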
And we're the product, and yet somehow we're still expected to pay. Amazing. Hey, that's going to do it for us here on this edition of Off The Hook. Again, please support WBAI: give2wbai.org or 212-209-2950. Write to us at oth@2600.com. And if you haven't gotten enough of us, you can tune into YouTube in about eight minutes. Follow the link on the 2600.com webpage or just go to Channel 2600 on YouTube. You can participate in Off The Hook Overtime and even call us and be part of the conversation. Thanks to David Greene from the Electronic Frontier Foundation for joining us tonight. Everybody else, we will see you in two weeks. Good night.