Meltdown/Spectre/Comet Conspiracy: Distributed Speculative Computing for Project X

Until now I didn't realize that CPUs are basically already quantum processors. Like a quantum processor, they compute possibilities that are not strictly required, ahead of time, so the results are already available if and when they are needed.

The Meltdown/Spectre vulnerabilities are claimed to be flaws in the "speculative execution" workflow that could allow a malicious userspace program to read the contents of system memory.
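
As a rough software analogy of the Spectre "bounds check bypass" pattern, here is a minimal sketch; real speculation happens inside the CPU pipeline, not in Python, and names like probe_table and the secret value are invented for illustration:

```python
# Software analogy of a Spectre-v1 "bounds check bypass" gadget.
# Python only models the control flow; the actual leak happens in
# CPU cache state. All names and values here are invented.

CACHE_LINE = 512                        # spread probes across cache lines
public_data = bytes([1, 2, 3, 4])       # the attacker may legally index this
secret = b"hunter2"                     # sits in memory just past public_data
memory = public_data + secret           # flat model of the address space
probe_table = bytearray(256 * CACHE_LINE)

def victim(index: int) -> None:
    # Architecturally safe: the body only runs for in-bounds indices.
    if index < len(public_data):
        value = memory[index]                 # out-of-bounds when speculated
        _ = probe_table[value * CACHE_LINE]   # secret-dependent cache footprint

# An attacker first trains the branch predictor with legal calls like
# victim(0), then calls victim(len(public_data)) and measures which
# probe_table cache line became hot. In Python nothing leaks; on affected
# CPUs, the speculative run of the body is what Spectre exploits.
victim(0)
```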

The patches for these flaws released so far are known to reduce performance, because they limit the CPUs' ability to speculatively process future possibilities.

Or do they?

What if these patches are merely redirecting the CPU resources to process other things, or process in a different way?

What if the "speculative" possibilities they have been processing all this time were not related to our user space programs (which seriously haven't been going that much faster since the 2000s despite all this speculative execution in our supposedly uber fast CPUs), but were actually being executed as part of a larger distributed processing cluster?

That is, what if all ARM and Intel CPUs in the world were already joined in a wireless supercomputing net, or at least an ad-hoc network that could self-communicate, self-update and sync commands when possible?

And these updates are merely changing that behavior, or fundamentally changing the processing system in some way?

Resources:

– https://en.m.wikipedia.org/wiki/Meltdown_(security_vulnerability)

– https://en.m.wikipedia.org/wiki/Quantum_computing

– https://en.m.wikipedia.org/wiki/Speculative_execution

(precursors to worldwide x86 adoption that partially illustrate the same concept)

– https://en.wikipedia.org/wiki/PowerPC_Reference_Platform

– https://en.wikipedia.org/wiki/Common_Hardware_Reference_Platform

At least for the purpose of a thought experiment, imagine that a CPU could take on the speculative execution workloads of nearby devices, wirelessly, seamlessly, and agnostically. Your PC might start humming when your iPhone heats up, or vice versa. The CPUs can communicate implicitly, at the hardware level, without any need for an OS, software, or a wireless radio. That is what they are DESIGNED TO DO; they do it automatically and completely naturally. If you have multiple CPUs near one another, they will share load to some degree, at least the speculative execution load, which may not be related to actual user applications at all. A minimal simulation of this idea follows.
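
Purely as a thought-experiment model (no known CPU shares speculative work this way; the classes, capacities, and handoff rule are all invented):

```python
# Thought-experiment model only: nothing here reflects real hardware.

class Device:
    def __init__(self, name: str, capacity: int):
        self.name = name
        self.capacity = capacity    # speculative "slots" it can run per tick
        self.queue: list[str] = []  # pending speculative tasks

    def tick(self) -> None:
        done, self.queue = self.queue[:self.capacity], self.queue[self.capacity:]
        print(f"{self.name} executed {len(done)} speculative task(s)")

def share_load(devices: list[Device]) -> None:
    # Move excess speculative work from overloaded devices to idle neighbors.
    for d in devices:
        while len(d.queue) > d.capacity:
            idlest = min(devices, key=lambda x: len(x.queue))
            if idlest is d:
                break
            idlest.queue.append(d.queue.pop())  # idealized "wireless" handoff

phone = Device("iPhone", capacity=2)
pc = Device("PC", capacity=8)
phone.queue = [f"task{i}" for i in range(10)]  # the phone heats up...
share_load([phone, pc])                        # ...and the PC starts humming
for d in (phone, pc):
    d.tick()
```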

Because everything is related, the late-2017 / early-2018 surge in the popularity of cryptocurrencies is also an integral part of this. The application distributed to this mass of speculative computing resources would naturally resemble a blockchain miner.
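
For context, the core of a blockchain miner is just a brute-force hash search, exactly the sort of disposable, massively parallel guesswork described above. A minimal sketch with toy parameters (no real coin's difficulty or header format):

```python
# Minimal proof-of-work loop, the core of a blockchain miner.
# The header bytes and difficulty are toy values for illustration.
import hashlib

def mine(header: bytes, difficulty: int) -> int:
    """Find a nonce whose double-SHA-256 digest starts with
    `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        data = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine(b"block header bytes", difficulty=4)  # ~65k hashes on average
print("found nonce:", nonce)
```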

Posted in Uncategorized | Comments closed

Facebook was Founded in 1993: CIA, Facebook, Microsoft and Planned AI Surveillance

– https://www.reddit.com/r/conspiracy/comments/7ed98v/why_was_the_post_about_facebook_being_founded_in/

– https://www.reddit.com/r/conspiracy/comments/7eaaff/mark_zuckerberg_is_an_actor_a_fraud_the_cia_had/

A non-profit venture capital firm based in Arlington, operated by the CIA:

– https://en.wikipedia.org/wiki/In-Q-Tel

To summarize one of the main premises of the Project X theory: this plan has been in motion since the 1950s at the latest. At that time, or even earlier, some agency or agencies determined that it would be ideal to assign a digital identifier to each citizen, like a serial number, for purposes of authoritatively tracking them. This would prevent fraud and crime, and provide a measure of statistical and informational authority to the government for purposes of administering a nation of hundreds of millions of individuals (or, in the case of governing the world, billions).

  • Were the Social Security Administration and the SSN an early, analog implementation of this idea?

There are two main components required to implement this system: the ‘tagging’ of individuals with IDs, and the central ‘mainframe’ system that holds the index of IDs, tracks access, and performs other functions with the data set as needed.
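
To make the premise concrete, here is a minimal sketch of those two components; the schema is invented for this post, not drawn from any real system:

```python
# Illustrative model of the two components: 'tagging' individuals with
# serial IDs, and a central index that records every access.
import time

class CentralIndex:
    def __init__(self):
        self._next_id = 1
        self.citizens: dict[int, str] = {}              # ID -> name
        self.access_log: list[tuple[int, float]] = []   # (ID, timestamp)

    def tag(self, name: str) -> int:
        # Assign the next serial number, SSN-style, as a permanent ID.
        cid = self._next_id
        self._next_id += 1
        self.citizens[cid] = name
        return cid

    def record_access(self, cid: int) -> None:
        self.access_log.append((cid, time.time()))

index = CentralIndex()
alice_id = index.tag("Alice")    # the 'tagging' component
index.record_access(alice_id)    # the central 'mainframe' tracking component
```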

Logically, the infrastructure system (the central mainframe) has to be designed and implemented first, before any tracking can be done or any data can be processed. This led to the creation of IBM (“International Business Machines”) and the first “mainframe computers,” under the auspices of creating databases and processing systems for business applications (a practical application that has been maintained and cherished to this day).

After the hardware infrastructure was reasonably implemented, at least in proof-of-concept form, throughout the 1960s, 70s, and 80s, the next major component to develop was the software that would be responsible for indexing and tracking the access of millions, if not billions, of individual end users to a central system. This led to the creation of Microsoft and, notably, Active Directory, which provides identity management services, allegedly so that businesses can manage their corporate employees and other users. This was likely a proof of concept for a much larger system, one that would assign a unique digital identifier to any user of the Internet or of computer systems and track her access throughout time and space.
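
For what it's worth, Active Directory really does assign each user a permanent unique identifier, the objectGUID attribute. A hypothetical lookup with the ldap3 Python library might look like this; the server, credentials, and account name are placeholders, and it needs a reachable domain controller to actually run:

```python
# Hypothetical Active Directory lookup of a user's permanent unique ID
# (objectGUID). Hostname, credentials, and account name are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldap://dc.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\reader", password="********",
                  auto_bind=True)

# Search the directory for one account and pull its stable identifier.
conn.search(search_base="dc=example,dc=com",
            search_filter="(sAMAccountName=jdoe)",
            attributes=["objectGUID", "whenCreated"])

for entry in conn.entries:
    print(entry.objectGUID, entry.whenCreated)  # the user's 'digital ID'
```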

By the 1990s, the hardware infrastructure and the software to coordinate it were ready, at least at an advanced proof-of-concept stage, and the next phase could begin. That phase began circa 1994 with the widespread adoption of the World Wide Web via the web browser (Mosaic, then Netscape/Mozilla), and it continues today, with web applications in the form of mobile apps running on the same Web framework created in the 1980s and 90s.

The goal of this ‘final phase’ (phase 3, after phase 1: hardware and phase 2: software) is implementation, ideally with the greatest possible number of civilians brought into the “digital ID tracking system” so that their identities, their access to content and digital resources, and their whereabouts may be tracked in real time. This led to the creation of Google, Facebook, and others, which were conceived only as methods of assigning digital IDs, en masse, that can be tracked across multiple discrete access devices and access locales.

Increasingly, we see “Sign in with Google,” “Sign in with Facebook,” “Sign in with Microsoft,” etc. This is one example, aside from the obvious one, which is that a huge amount of daily activity and livelihood has already been consolidated into services provided by Google, Facebook, and Microsoft (which owns LinkedIn, Xbox, etc., in addition to Windows). Every time users access these services, they provide their “digital ID” for authentication with the central Facebook/Google/Microsoft authentication server; their IP address and whereabouts are recorded, a timestamp is attached to the access event, and so on. This is common practice for any login system, whether it is a fan blog you create yourself or a cloud service like Gmail.
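
What one of those access records plausibly contains, sketched below; the field names are my guesses, and every provider's real schema differs:

```python
# Hypothetical shape of a login audit record; field names are invented,
# not any provider's actual schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LoginEvent:
    digital_id: str      # the stable identifier, e.g. an account GUID
    service: str         # which property was accessed
    ip_address: str      # network location of the access
    geo_hint: str        # coarse whereabouts derived from the IP
    timestamp: datetime  # when the access event occurred

event = LoginEvent(
    digital_id="user-00000000-0000-0000-0000-000000000000",
    service="mail",
    ip_address="203.0.113.7",        # documentation-range IP
    geo_hint="Arlington, VA",
    timestamp=datetime.now(timezone.utc),
)
```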

But this is not merely security best practice; this is active surveillance, with the goal of collecting information and identifying characteristics of individuals and groups as an intelligence activity. This is the nature of AI, and it is why the developments of mainframe systems, personal computing, and the World Wide Web have culminated in the current trends of “artificial intelligence” and “cloud computing,” which have indeed been buzzwords for multiple decades.

> Why are they doing this, and what is the ultimate technical goal? Complete and total real-time tracking of everyone, supposedly for pre-emptive identification of national security threats. But is there a bigger, technically oriented objective? Why has this plan taken the better part of a century, if not many centuries, to implement stage by stage? (Secret Space Program; God AI)

Posted in Uncategorized | Comments closed

Faraday Future FFZERO1 2016 Concept

https://www.ff.com/us/futuresight/ffzero1-a-car-of-concepts/

Posted in 2016, Concept, Faraday Future | Comments closed

1925 Rolls Royce Phantom I Coupe Concept

Modern concepts based on the Rolls-Royce Phantom I Coupe, originally introduced in 1925 with a simpler chassis. The body below is a one-off custom created in the 1930s and restored over the years to its present form.

The original (custom) car it was based on:

Posted in 1925, Rolls-Royce | Comments closed

Alfa Romeo 8C

Posted in 1934, Alfa Romeo | Comments closed

1934 Chrysler Airflow

Posted in 1934, Chrysler | Comments closed

Toyota MR2 GRMN 2011 Concept Car

Posted in 2011, Concept, Toyota | Comments closed

2007 Jaguar XK Coupe

Jaguar New XKR. (07/01/06)

Posted in 2007, Jaguar | Comments closed

1956 Jaguar XK140 Roadster

Posted in 1956, Jaguar | Comments closed

1956 BMW 507 Roadster

Posted in 1956, BMW | Comments closed