Alex Jones: Did You Promote The Crowdstrike Conspiracy?

Alex Jones, a conspiracy theorist and far-right media personality, was tried in Connecticut for spreading falsehoods about the Sandy Hook school massacre. His Infowars media platform and its assets will be sold off piece by piece in auctions this fall to help pay the more than $1 billion he owes to relatives of the victims. Juries have ordered Jones to pay enormous damages for spreading lies about the massacre, yet his influence in right-wing media and politics remains strong. In a separate Texas trial, Jones was ordered to pay more than $45 million in damages to Neil Heslin and Scarlett Lewis, the parents of a 6-year-old who was murdered in the Sandy Hook shooting.

A federal judge has ordered the liquidation of conspiracy theorist Alex Jones’ personal assets, but dismissed his company’s separate bankruptcy case. Jones has been found liable for defamation and ordered to pay $965 million to the families of victims of the 2012 Sandy Hook shooting. Jones has told his audience that money donated to Infowars goes toward fighting the case, saying, “The money you donate does not go to these people.”

In conclusion, Alex Jones, a far-right conspiracy theorist, has been ordered to pay over a billion dollars for spreading lies about the Sandy Hook school massacre, yet he retains influence in right-wing media and politics. His Infowars media platform and assets will be sold off piece by piece in auctions this fall to help pay the remaining debts.


📹 CrowdStrike CEO: ‘We know what the issue is’ and are resolving it

George Kurtz, the CEO of cybersecurity company CrowdStrike, joins TODAY to share details on what caused a massive computer …


📹 CrowdStrike Blew Up The Internet

Command to help fix: del “C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys” The CrowdStrike Reddit Thread: …



Pramod Shastri

I am Astrologer Pramod Shastri, dedicated to helping people unlock their potential through the ancient wisdom of astrology. Over the years, I have guided clients on career, relationships, and life paths, offering personalized solutions for each individual. With my expertise and profound knowledge, I provide unique insights to help you achieve harmony and success in life.

Address: Sector 8, Panchkula, Haryana, PIN - 134109, India.
Phone: +91 9988051848, +91 9988051818
Email: [email protected]


84 comments


  • I am a cybersecurity professional, and this will go down in legend. You never roll out updates without pre-deployment testing; that is like skydiving without testing your parachute. It is literally a federal regulation in the government sector. And you also never roll out updates all at once; you do it in phases to avoid the planet coming to a halt like this. If there’s anything positive, it’s the reminder to all of us cybersecurity people about the criticality of our profession, and of all IT folks in general.

  • The problem is that even if CrowdStrike fixed the issue on their end, the endpoint machines will not receive the updates, since those machines are not booting due to the driver issue. Unfortunately for the companies, they will have to rely on their local IT personnel to fix each and every computer that was affected. So imagine companies having thousands of machines affected; those have to be fixed manually by the IT guys…

  • I’ve been working in IT for over 20 years, and the cardinal rule is that you never make a change on a Friday. This is what happens when you hire too many contractors who are underpaid, lazy, and take no accountability. Don’t believe anything this guy is saying; they didn’t properly test this patch. I am blown away that they did not have a development environment where this was thoroughly tested before being rolled out so haphazardly.

  • CrowdStrike should not be surprised like this. I mean that literally. They need to slow down the initial rollout of updates and monitor the health status of the updated systems. If they had been monitoring the first systems to be updated and detected that they were not coming back online, they could have automatically halted the bad rollout with relatively few systems negatively affected.
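
A minimal sketch, in Python, of the staged-rollout-with-health-monitoring idea described in the comment above. The wave names and the deploy_to / host_healthy functions are hypothetical placeholders for whatever deployment and telemetry tooling a vendor actually runs; this illustrates the technique, not CrowdStrike’s actual pipeline.

    import time

    # Hypothetical rollout waves: a small internal canary ring first, then
    # progressively larger customer groups.
    WAVES = [
        ["canary-01", "canary-02"],
        ["region-a-01", "region-a-02", "region-a-03"],
        ["region-b-01", "region-b-02", "region-b-03"],
    ]

    def deploy_to(host, update_id):
        """Placeholder: push the content update to a single host."""
        print(f"pushing {update_id} to {host}")

    def host_healthy(host):
        """Placeholder: did the host check back in (heartbeat/telemetry) after updating?"""
        return True

    def staged_rollout(update_id, soak_seconds=3600):
        for wave in WAVES:
            for host in wave:
                deploy_to(host, update_id)
            time.sleep(soak_seconds)  # let the wave soak before judging its health
            if not all(host_healthy(h) for h in wave):
                # Automatic halt: only the current wave was exposed to the bad update.
                print(f"halting rollout of {update_id}: wave failed health checks")
                return False
        return True

    if __name__ == "__main__":
        staged_rollout("channel-update-291", soak_seconds=5)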

  • The CrowdStrike CEO is giving us a master class in paltering; the misuse of facts to tell a lie. His line that they “remediated the issue” means they stopped pushing the toxic patch long after everyone was staring at a BSOD. The fix requires going from machine to machine and manually removing the patch, which is four simple steps, but these machines are also running BitLocker which complicates the fix. A rolling release of patches would have greatly limited the damage. Push a patch to one area, wait an hour or so, then push it region by region. If you’re a remote worker and you’ve got a BSOD on your laptop caused by this issue here’s the fix: 1. Get a box 2. Print a UPS label 3. Enjoy a couple weeks on vacation

  • 1. Not properly tested 2. Deployed on a Friday 3. Workaround was posted on their website but required you to sign up and log in to see it 4. Saying it only affects Windows, as if that were a mitigating factor. Even if your server is Linux, if the DC has enough Windows machines going down it can still affect you (i.e. connected services, databases, power surges, etc.) 5. Deployed on a Friday 6. Deployed on a Friday 7. Deployed on a Friday 8. Deployed on a Friday 9. Deployed on a Friday 10. Deployed on a Friday 11. Deployed on a Friday 12. Deployed on a Friday 13. Deployed on a Friday 14. Deployed on a Friday . . . . 99. Deployed on a Friday

  • Here are my two cents: first, I thought there was a whole panoply of different players in the cybersecurity market, with no one vendor having anything near a monopoly: CrowdStrike wasn’t even a household name until now. So why would an outage at any one cybersecurity vendor (which they themselves identified in less than 2 hours) have such a gigantic effect? And secondly, why would a content update cause such a low-level failure that the Windows kernel would fail to boot? I would expect that only a software/engine upgrade would do that, whereas a mere content update should only interfere with some apps – even if it is so bad as to contain random nonsense or to “block everything,” the OS kernel should still run. This points not so much to a poorly tested content update, in my opinion, but rather to a design flaw in the software’s architecture itself.

  • He’s lying via omission: he said they deployed the fix and that computers are rebooting and coming up and working; this is almost entirely a lie. They are working AFTER someone goes in and deletes the bad driver they dumped into C:\Windows\System32\drivers\CrowdStrike… This will be pursued legally, and this video will be used as evidence.

  • The file that was updated was not just a virus definition update, as he is implying. They updated an actual executable driver file (with a .sys extension) that requires extensive testing and normally staggered or limited rollouts before being deployed to everyone. He is hiding something and not telling the truth. Were their systems internally compromised?

  • Never deploy a patch or update across the globe in one go; you have to do it in batches! Same with production deployment: you don’t deploy releases on all production servers at once – do it one by one and test after deploying to each server. This is definitely a process issue, or someone didn’t follow the SOP.

  • This looks and sounds way too close to the excuses for the Baltimore bridge situation. Cyberwar is real and has been going on behind the scenes for a long time now. Any breach would cause mass panic, people emptying bank accounts, market crashes, etc. If I were a betting man, I’d say the two situations are connected somehow. I think we’ll see more of these situations before the end of the year. It would be a good time to be friendly with neighbors and start community plans, just in case the grid goes down at some point.

  • I know this effin sucks. This was a lot of work to fix, but compared to how much CrowdStrike has saved us in the past, this is nothing. Without their protection, the mess we would have had to clean up would have been much worse. Our bank was just hit by a cyberattack and only recovered after 3 weeks of outage. This only took about 10 minutes per computer, compared to 4 hours apiece. The only reason it took long was that there were a lot of computers and servers affected. Ransomware is far nastier to recover from.

  • Being a software engineer myself, the reporters are asking the wrong questions. News networks should have IT correspondents and experts who know IT better to ask more relevant questions, given how huge the impact of IT is nowadays. They missed the obvious question: how did an obviously flawed update like this pass their QA??? There must be something more to this….

  • It was irresponsible to deploy an update without proper testing, especially right before the weekend. Although the deployment was actually on Thursday, the individual responsible claimed to have tested it on their device. This highlights a significant issue with Crowdstrike, particularly Tenable, as it suggests a lack of thorough testing protocols and quality control.

  • Why did they NOT test it on their own internal test servers, and why did they release the patch on a Friday? Patch releases happen on Tuesdays for a reason. This is like the worst mess-up I’ve seen from a software company. That CEO needs to step down, and the whole company needs to be reconfigured to prevent such a mistake from happening again.

  • I dealt with a similar issue from another vendor; it would crash on Windows update. That vendor denied the issue for months. My systems team wouldn’t disable Windows update, and when I disabled it on a computer, they would revert it. So for months I had to fix the issue on the same 60 computers, at various hours, going from home to work at random times. No overtime pay.

  • Stellar reporting there, Today Show… you had the CEO of the company on the air, yet somehow managed to provide the same info that’s available everywhere else (and, in some cases, still somehow managed to report even less than some other outlets). You should have at least looked like you were trying to get him to answer the questions that he dodged… and definitely should have gotten an answer about what he will be doing to make sure that, in the future, no update can cause a global knockout of all the things this one did. This was an accident… imagine if some effort were taken to purposefully initiate an “update” with actual malicious intent – it’s a scary thought to see how unacceptably fragile it all is.

  • This same thing happened to me several months ago and I had to reinstall everything on my laptop. My guess is that the actual affected numbers are way, way underreported. It’s BS that these guys and MSFT care about customers. They’re only fixing it because of the massive scale of problems this has caused.

  • It’s a shame that world-class infrastructure runs on Microshit. Every organisation should be expected to use its own OS, designed for its own use. Any tech-savvy person knows that servers use different Linux distros, which are open-source OSes collectively known as Linux. At the heart of each Linux distro they are all the same, with a Linux kernel. If you care about privacy and want to own your computer instead of letting Microsoft control it, you should switch to Linux. Only then can you control every single program on your computer. Also, Linux is free as in freedom.

  • It’s like electricity – when the lights are on, no one calls the power company to say how great their electricity is, but you better believe the moment the power is out people are quick to complain. I feel bad for any IT support desk that was impacted. Also, now that we have a million new IT “experts” voicing their well-versed opinions on the internet, we will surely see improvements to company IT staffing levels throughout the industry. Because IT is always fully staffed with tons of support from leadership – /s. Hang on while I try to fix this system from 2009 running XP for the 9th time this week.

  • How are they going to roll back the update if the computer is blue-screening in a loop? Unless CrowdStrike is manually disabled/deleted through safe mode or the command prompt, which has not been successful on all computers. We are preparing to re-install Windows on each computer; that is going to take time. Stop choking, you know this is a big mess!!

  • 30 years in the IT business and still going. I’ve seen many companies release updates that wreak some sort of havoc on an operating system – usually confined to that application, but sometimes slowing a PC down to the point where it was barely able to do its job. But not since Windows 3.1 have I seen an application that isn’t natively part of Windows completely crash the operating system on this scale. The addition of the virtual machine in Windows 95 was designed to mitigate this, so that one application couldn’t completely monopolize the processor or kernel. I think Microsoft bears some responsibility for this too, because the design of the OS should inherently prevent a single 3rd-party application from doing this.

  • As a software engineer, I understand their dilemma entirely, especially in cybersecurity, where an engineer is constantly dealing with new and emergent ways to breach systems. There’s cutting-edge technology, and then there’s cybersecurity. What they do is hard, and when they do their job right, it’s typically a thankless job because nobody notices.

  • It’s CI/CD. Big companies release bugs in their software all the time and have to roll back releases. The problem with this one is that the bug put the devices in a state that couldn’t be rolled back. And you are only hearing about it because the software is installed widespread on important infrastructure.

  • Organisations will advertise their change windows/potential downtimes with their clients. Not saying you ‘have to’ go down. Within the org, there’ll be agreement as to the most strategic/safe change windows to use. A lot depends on internal vs external client user base requirements. And of course human resource (skillset) availability.

  • Why the heck did they not test it before sending it worldwide? Was it deliberate incompetence or real incompetence? The people working for this company need to be investigated by our intelligence agencies for potentially nefarious conduct. This amount of power in a single company is a serious threat to democracies.

  • It had a profound butterfly effect on my long-awaited family vacation. It was painful. It is infuriating. My question as a non-tech-savvy person is, “was this preventable?” If it wasn’t, then that’s understandable. I understand things don’t go as planned. But if it was, I think somebody must be held responsible.

  • There is no way CrowdStrike can resolve this issue at this point. All affected computers need to be remediated manually. When he says they corrected the issue, what he means is that they fixed the patch so no further computers are affected. But the millions of computers that already received the patch will need to be manually fixed.

  • How is it possible that one company makes a mistake and millions suffer? Who is accountable? Is there any accountability? Why can’t there be a Plan B to fall back upon? Companies should look to more than just one vendor. The result is disastrous. The CEO of the company should be held responsible for this mess. How many people suffered? An investigation should be done by an INDEPENDENT company as to why this outage took place. Thanks

  • Regarding the CrowdStrike IT outage that continues to cause global disruption: Microsoft allows “CrowdStrike” software to be used on its platform and marks it safe for usage (is it flagged as malicious software? No?). Equally responsible. External software should never be able to interfere with and disrupt the normal operations of the main OS and hold everyone hostage. The fact that Microsoft allows this to happen says one thing: it’s vulnerable and there are no safety nets/measures. Apparently, Indian CEO Satya Nadella did not ensure the reliability/quality of its product, e.g. by doing multiple quality checks or implementing safety measures throughout his 8+ years of tenure as CEO since 2014. He is reactive but not proactive, and failed to foresee this due to lack of foresight. That is 8+ years of negligence and overpaying and uselessness of this CEO.😅🤣😂😁😇🤭Don’t hate me or my comments, hate facts instead.

  • The recent outage was 100% Microsoft’s fault! When Microsoft transitioned from Windows NT 3.51 to 4.0, they moved the graphics device interface (GDI) from user mode to kernel mode to improve performance. This change made the OS faster but also more vulnerable to crashes caused by third-party software. BSODs could occur because poorly written or malicious code running in kernel mode could directly impact system stability. The recent issue with CrowdStrike’s Falcon sensor update, which caused widespread BSODs, highlights how third-party kernel-mode software can critically affect system stability when something goes wrong. If CrowdStrike’s Falcon sensor had not been running in kernel mode, the operating system might have been able to isolate the faulty update, preventing it from causing widespread BSODs. The faulty CrowdStrike update could have been rolled back or patched remotely and efficiently. Kernel-mode access indeed means that any bug or faulty update in such software can have severe implications for system stability, because it operates with the highest-level privileges.

  • Sorry to say, but Microsoft has always had problems with its updates over the last two years. All the updates are done automatically and they always have some kind of issue. Microsoft really has to dive deep and find a way to update Windows, which is used globally by the majority of the world, without it crashing. It is Microsoft’s responsibility.

  • So one software company and its code can paralyze the whole world’s IT systems? What happened will happen again and again. Can this company be trusted? What compensation will this company provide to all the consumers and companies around the world? Trust is broken. Will companies and Microsoft band together and find another software company to replace CROWDSTRIKE?

  • Somebody should tell him, “it’s going to be alright.” No loss of life yet. What a challenging thing. I would guess that Microsoft is deeply transparent with CrowdStrike concerning how Windows functions, perhaps even making source code available? I think in the end, new policies will come into play, and maybe Microsoft itself will be able to adjust Windows so it can always roll back to a functioning version. I have Windows 11 on several machines; the reliability and features got me away from Linux (which I still admire). Windows 98 is gone.

  • He’s an incredible liar… rebooting the systems did not work at all. Systems that were lucky enough to be offline simply did not get affected. The solution for the ones already affected was 100% manual: removing the update in safe mode and rebooting. Hundreds of systems in my case, including VMs, had to be manually serviced or they would stay on that BSOD. Hackers would have been proud to do such an incredible job of bringing the whole world to its knees.

  • How is it that nobody’s talking about the fact that this man is blinking SOS? There’s a video of a POW soldier, from I believe the Korean war, blinking the word torture. They filmed the video trying to prove they were not harming the POWs. It looks exactly like this. He is deliberately blinking the same pattern over and over.

  • It is obvious that CrowdStrike subscribes to the “fly by the seat of your pants” IT methodology. If they had tested this update thoroughly in a non-production environment using the different “flavors” of Windows that the CEO says their clients are using, they would have identified any incompatibilities before rolling it out. Instead they crossed their fingers and sent it out, hoping that nothing would go wrong. Surprise!! I wonder, did they notify their customers that the update was coming? I’m sure they will find some low-level developer to pin this on, and they will be forced to fall on their sword.

  • Mm… sounds to me like they were “hacked” and are making up stories. You’re telling me a well-established company like this doesn’t have a proper deployment protocol? Big companies run many tests to make sure the production product is not affected, and even then it’s usually done during a low-traffic time like midnight, and if things don’t work by morning, a full revert is done. Total BS.

  • On the question of backups, this isn’t something that should be directed towards Crowdstrike. It’s a question for its users. They could have had backup hardware. They could have had that backup hardware running a competitor’s software. The simplest answer for “why was there no backup” is because no one wants to invest in two systems for critical tasks. They throw all their eggs in one basket and trust it won’t go down. But humans are human and things go down

  • Ordinarily code is tested before it is committed. There is something else going on. The operating system manufacturers should have tested it before allowing any dependencies to be pushed to the stack. CrowdStrike was the point of failure because Microsoft allowed them to be, but they are not the only one.

  • Bugs are common! Companies’ IT teams at banks, airlines, and so on failed at the basics. Why apply patches without testing before deployment? Recertify all your IT staff 😅😅😅😅; they do not want to take the blame for what they have done. Instead, they are blaming CrowdStrike. Where are the snapshots, the previous good state of your infrastructure? You could bring it back online in less than 4 minutes. Where is the redundancy in these companies’ infrastructure?

  • “Always trying to stay one step ahead of the adversary”? Then get Microsoft to provide you with the update first, and test it over the weekend on your test-bed computers, which mimic the myriad of systems used by your clients. MS updates are notorious for bugs that will cause BSODs; all it takes is one bad update to bring down an entire network. I retired last year from the IT department of a school district, and it would happen to us from time to time because we had no test computers. This is inexcusable on your part; you should have a fully developed testing department.

  • Let that be a lesson to all IT dev managers, delivery managers, and developers who don’t value Software Test Engineering. I’ve heard it all in my career, from being called a test monkey to other names. Always getting pushback from developers when trying to do the right thing. Always having to hear “It worked on my laptop.” Changes normally get validated in multiple environments before releasing into production. Interesting case here.

  • Ah, this was a normal thing since Windows 95. Usually a RAM overload meant you had to reinstall from a backup external hard drive. This is exactly why I switched to Apple in 2002: because it works. This is nothing new. I can virtually run Windows 11 on macOS to run all those stupid programs, and if something glitches (because it always does with Windows) I can reboot in 30 seconds. macOS and Linux are way better. Even with the Microsoft Xbox 360, I remember the red ring of death breaking many systems, because this is a normal thing for Microsoft. Not Apple.

  • Does anyone know why antivirus software is implemented as kernel device drivers? Any kernel code can take down the whole system, as we’ve seen. You would think Microsoft would add specific system calls to their kernel specifically for antivirus software to avoid this situation. A bigger question would be, why are these systems using Windows at all?!?

  • So the kernel driver for their EDR was full of nulls. Some of the analysts who have spoken on this issue don’t say more than “that’s weird.” Most agree that it wasn’t an automatically generated kernel update but a manual one, and they don’t think it was AI-generated either, so it’s possibly a “corrupt” driver update that was pushed out.
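
The “full of nulls” observation is easy to verify against a local copy of the offending channel file; a minimal Python sketch, with a hypothetical sample path:

    from pathlib import Path

    # Hypothetical local copy of the channel file being inspected.
    path = Path(r"C:\samples\C-00000291-sample.sys")

    data = path.read_bytes()
    nulls = data.count(0)
    total = len(data) or 1  # avoid dividing by zero on an empty file
    print(f"{path.name}: {len(data)} bytes, {nulls} null bytes ({100 * nulls / total:.1f}%)")
    print("entirely null bytes" if data and nulls == len(data) else "contains non-zero content")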

  • This is IT incompetence on the part of companies that use these cloud services instead of building out their own systems and infrastructure. And who would use a Microsoft product… use Linux. This is basic engineering. Using other people’s services running on other companies’ computers isn’t engineering.

  • Sent an update, on a Friday, didn’t test, no regard for customers, literally ‘god mode’ access to key infrastructure. This is a wake-up call for people who put their trust in these companies and check boxes to be ‘compliant’… Bottom line is that unless your own people are taking your systems, services, and customer data seriously, no one is.

  • Shouldn’t they be responsible for the damage done? Shouldn’t there be a CrowdStrike class action lawsuit? Who gets to say “our bad” and move on? Just the airline industry alone suffered billions in losses, not to mention those people in an emergency situation who could not get the help they needed. Some may have even died because of this.

  • Being in IT, my division was responsible for pushing software updates. One of the things we learn is to pick a group to test it on. We would do the update, see if there were any negative results, and fix them if so. Test it again until we feel it’s done. Then push it out to everyone. It blows me away that a company responsible for the world’s security would not do something like this. It shows their arrogance and lack of testing, pushing it out because they want to stay ahead of the bad guys. Just imagine the dollar amount in damage, and hopefully no one is killed in the hospitals that are recovering.

  • Paper and pencil were the saviors of the night last night at our local 911 dispatch center. Medical calls, law enforcement calls, an oil spill, normally not a problem for a couple of overnight dispatchers to handle with Computer Aided Dispatch software, but it became instant hair-pulling frustration when suddenly everything had to be tracked on paper.

  • Boss: Team, I told CrowdStrike we would be ready to roll out the update before the weekend. Dev 1: No way, we need to test it. Boss: Don’t you want the weekend off? Roll it out immediately. Dev 1: But it isn’t… Boss: Dev 2, roll out the update, Dev 1 said it was ready. Dev 2: Okay boss. Chaos ensues. Boss: I told you guys we needed to test before rolling out the patch!

  • I manage a Starbucks and live in a city of 100,000 people. I walked in at 5am, saw the blue screen on our POS, said “Huh…”, unplugged it, then plugged it back in, and it worked. Corporate called my boss and asked how our Starbucks was the only one up. How was I the only Starbucks in a district of a major city that figured out “Did you try turning it off and back on again?” Corporate should be paying me a major salary…

  • Down Under (Southern Hemisphere) all airports in chaos. Handwritten boarding passes, cancelled flights, handing out bottled water to stranded passengers. Broadcast TV networks running with reduced facility. Emergency services still reportedly fine. Bit of a mess. So….this high-tech “cashless society” we keep hearing about…maybe not such a good idea….

  • At my doctor’s office all the computers went down and they couldn’t do my prescriptions. It happened in the middle of my appointment; if it had happened earlier I would not have been seen at all, it would have been another week before I could get seen again, and I wouldn’t have gotten the antibiotics I needed.

  • FYI, for those who aren’t from Australia: the big thing about that “supermarkets go cash only” headline is that the banks and supermarkets have been trying to push everyone to a digital economy for a while now. They even went so far as to cause issues with one of the big armored car companies. They want to force transaction fees on everyone and to snoop on private exchanges, etc.

  • Friday. What a perfect day to deploy to production (and customers)… Or was it a case of “it is not yet Friday, it is 11:50pm on Thursday. Let’s deploy!”… 25+ years in IT and software development has taught me one thing: Friday is like Sunday, you go out and pray that systems will survive till Monday morning and you do not touch anything. Except if the environment is fully on fire… 😛

  • This isn’t an Internet-related issue. It’s worse. It’s the physical infrastructure. If airports, banks, etc. went offline, it would be a massive problem, but they usually have plans for that. However, they lost access to their computers completely, never mind the Internet access. This is precisely why most important systems run in complete isolation, “air-gapped”, with very manual software update processes. I know whenever we needed to deploy an update of our software to such environments, it was a tremendous hassle, with endless safety procedures and integration tests along the way.

  • I am a boots-on-the-ground manager for one of the largest first aid and safety companies in the country, in the corporate US National Distribution Center warehouse. We do more than $1B a year in just our center to all of our branches. An outage that only cost us about half a day of work on a Friday (a BIG ship day) is estimated to have cost just our distribution center millions in business and re-picks for the year, due to corrupted shipping information and the inability to coordinate work digitally across a warehouse the size of FOUR Costcos. That, and half of our corporate wing was out of state for the week at a big summit and they all had flights back Friday morning, so… they got stuck (LOL).

  • I work third shift at a hospital and was at a code blue during the outage. We literally couldn’t access the patient’s CT images to see if they had a perforated bowel that had incited the cardiac arrest. Not to mention the hospital’s critical systems were crippled most of the night further delaying patient care.

  • That should also be a reason for ALL those companies to be scrutinised: why did critical infrastructure like hospitals roll out updates on all machines without any verification!?!? If your system is so vulnerable that a single bug destroys your entire IT system – yeah, you’ve got other problems. And we can have the same thing happen if just a single JavaScript library goes wrong. There are thousands of tiny packages basically nobody knows exist, yet millions of websites rely on their exact (buggy) behaviour. But seriously – a single update and many different industries are completely down, all at the same time, because NONE of them had ANY checks or validations for their systems, and none of them have any redundancy or recovery plan. That is just pathetic.

  • Canada just woke up Friday, July 19, 2024 to businesses shut down, grocery stores shut down, vending machines not working, and basically time travelling back to the year 1870. BTW, gasoline and diesel stations are not working, airlines are shut down as well. Hoping to have internet services up and running by 12:00pm EST.

  • I am in Perth, Western Australia. Yes, it happened to me. I work in an office and the whole thing went down, including the phones etc. The computers were also making really creepy sounds 😮 It’s actually so funny to see how everything and everyone just falls apart as soon as anything like this happens 😂 ridiculous. Our work finally sent us home after waiting 2 hours and realising it wasn’t going to be fixed by their own IT department 😂

  • This happened all the time at a previous company. The IT folks would force an update without fully testing it, then it would blow up, and then IT gets overtime hours and busy-looking tickets from people screaming for their help. Testing and scenario planning is tedious and boring, but it has to be done. It also takes people who have sharp imaginations…as in, what could go wrong, and let’s test for that.

  • I’m from Southeast Asia and am working for an American corporate company in the automotive industry, and the BSOD issues hit us at around 12:30 noon, which is around 5 hours ago as of writing. Management then decided that we would leave the office early (3PM our time), as those of us working on a computer would have nothing to do. This really affected the global community LOL

  • I’m currently bringing our Azure env back up using the fix CrowdStrike provided. So far, it is working but taking forever. What really sucks is that you need physical access to the machine to resolve it. Workaround steps: Boot Windows into Safe Mode or the Windows Recovery Environment. Navigate to the C:\Windows\System32\drivers\CrowdStrike directory. Locate the file matching “C-00000291*.sys”, and delete it. Boot the host normally. Latest updates: 2024-07-19 05:30 AM UTC | Tech Alert Published. 2024-07-19 06:30 AM UTC | Updated and added workaround details.

  • Nice post. I just got home from my second job (I work in a bar/entertainment establishment) and the credit card processing ground to a halt completely just about 2 hours ago, between 12am and 1am mountain time. Dreading getting up for work tomorrow as an employee of a major telecom provider. On a Friday this blows up. LOL

  • CrowdStrike is actively working with customers affected by a flaw found in a single content update for Windows hosts. Mac and Linux hosts are unaffected. This is not a security incident or cyber attack. The issue has been identified, isolated, and a fix has been deployed. We direct customers to the support portal for the latest updates and will continue to provide complete and ongoing updates on our website. We also recommend organizations ensure they are communicating with CrowdStrike representatives through official channels. Our team is fully mobilized to ensure the security and stability of CrowdStrike customers. 🤦‍♂

  • My buddy who works at the state police in Maryland said that systems are down all over the state. My brother at an MSP said there are hundreds of Azure VMs he has to manually fix by detaching the disks, attaching them to a working VM, deleting files, then re-attaching the disks… I might just call out of work today.
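
For the attach-the-broken-disk-to-a-working-VM approach described above, the deletion step itself is easy to script once the rescued OS disk shows up under a drive letter; a minimal Python sketch, assuming a hypothetical mount point:

    import glob
    import os

    # Hypothetical drive letter where the broken VM's OS disk is attached on the
    # working "rescue" machine (check Disk Management for the real letter).
    MOUNTED_DISK = "F:\\"

    pattern = os.path.join(
        MOUNTED_DISK, "Windows", "System32", "drivers", "CrowdStrike", "C-00000291*.sys"
    )

    matches = glob.glob(pattern)
    if not matches:
        print("no matching channel files found - nothing to delete")
    for path in matches:
        print(f"deleting {path}")
        os.remove(path)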

  • Living in northern Mexico, really close to the border: my aunt came to visit (she lives in the US), and when she tried to cross back into the US, all of the CBP systems were messed up, so no one would be crossing anytime soon; she had to stay with us for the night, but it was obviously related to this. This thing is highly disruptive for a lot of folks around here; the queue to cross has been massive today since it’s all of the people who were meant to be back already. It’s pretty insane.

  • So… I work in IT, and we use CrowdStrike. Today was not fun, and the coming days won’t be fun either. We have 700 machines with CrowdStrike and at least 80% got affected, including our DCs… We deployed everyone, including the CTO, to go physically to machines to fix them and to guide workers through doing the fix themselves. And after 13 work hours we are not done…

  • I am not personally affected as I have a MacBook. However, my roomie’s work laptop (PC, obvi) has been stuck on the BSOD for hours. I have friends in Australia who were the first to make note of it. They’re all running CrowdStrike and are super frustrated. I feel for the IT people and those who need it for essential business to function. Y2K really struck 24 years later…

  • That’s the problem with updates going straight to the cloud. If thorough testing isn’t done before deployment, everyone gets affected. Unlike managing patches internally, which allows for better control. This situation shows it’s still best to manage patches internally, as CrowdStrike doesn’t seem to support this approach.

  • Reminder: In case you missed it, this “glitch” was actually a brilliantly orchestrated rehearsal to test our systems for future global war distributions. And who else could be behind such a masterstroke? None other than the covert masterminds at the CIA/MOSSAD. Truly, their subtlety and foresight deserve a standing ovation.

  • Do you think I could become rich if I make a software tool that blocks CrowdStrike updates and sell it as a kind of “anti-BSOD” tool? 100% satisfaction guaranteed or the customer gets their money back… There must be a lot of clients willing to pay for that, when we see how hard airports, banks, hotels, etc. have been hit and how much money they’re losing now…

  • The fallout from this has been pretty disruptive at the company I work for. We were nearly locked out of all of our servers and workstations without recourse because of some of our heightened security measures. It’s not like a day when AWS goes down and then comes back up again. We’ll be tracking down affected systems for some time to come.

  • Let me give some perspective on the level of headache this is from a corporate IT team’s perspective. Our company has 10k servers across the globe – all down. Last night MS reached out, and our limited IT team (it’s Saturday in India) could only bring up 250-something servers! A lot of them we couldn’t even log into! And it was Saturday night!

  • One of the many, many downsides to everyone and everything having common, connected components, especially when those components have forced updates. If CrowdStrike has caused this, imagine if a bad actor (or an incompetent actor) gets into Microsoft. Does every computer really need to be on Windows? Does everything really need to be a computer?

  • This is exactly why I never allow my portable devices, apps and computer operating systems to install updates automatically. I turn those settings off on every new device. Unless there is a major security hole that needs to be patched it is unwise to install updates that haven’t been tested in the real world.

  • Russia and China have avoided the CrowdStrike disruption, and it is naive to suggest that an update to protection systems can disable these servers instantly. There is a protocol applied to operating systems whenever any software or update is implemented before its release. This is considered a highly professional maneuver.

  • Hey John, what are your thoughts on this being a simulation of a global threat actor? Naturally this would be an “unethical” penetration test of sorts, which would have had to have global coordination, but I think this is the most likely scenario. Back around 2011, I am sure it was identified that there would come a time when global critical infrastructure would face a legitimate threat. Maybe this was a very long plan to embed Crowdstrike software into the global critical infrastructure, so that it could eventually test out this scenario. The data gathered from the response would be EXTREMELY valuable to governments.

  • Here at UPS we had most of our buildings (~60%) halt so we could issue fixes. We were able to get our technicians to start deleting the files off our servers to get the buildings up and running, but there were 8 hours of downtime nationwide. The solution gives us a fix for the root cause; we’re still addressing the symptoms of the crashes, such as lost data and corrupted files. We had employees lose complete access to their C: drive, with the file system changed completely to RAW. Crazy to see the worldwide impact.

  • Yeah, I needed to log in to a client’s remote server that got “crowdstruck” to do my work. It didn’t boot; it had only just updated to the bad version. These bad updates seem to have been pushed early in the morning. Luckily there was a workaround available during the morning. It involved deleting some files after booting into safe mode.

  • We use this software, and got an email this morning about it affecting our company’s servers. Some of the onsite networks are dead and half of the corporate website links are nonfunctional (including the payroll system, oops!!!). Some of the sales and customer service teams are getting to leave early today because they can’t even log in. The IT staff is getting yelled at by management even though this is out of their control. Just a complete clusterfuck. XD
