It is important that you take a quick look at the overview and then the correct section before using the site, or you will go wrong!
There are two free programs on this website. The first is the scrambler, a web proxy where the server is unaware of whom it is serving each page to.
The second is an onion network in which every single part of the network runs in the browser, including the relays, the exit nodes and the stochastic scrambling algorithms. The only exception is initial peer finding; after that, peers are swapped randomly between other peers, with existing connections used to signal new peers. The onion network operates as a flash network in which nodes are transitory and keys are shared along spare paths, meaning it can survive as an onion network in an environment, such as the browser, where peers may only exist for mere seconds. Other onion networks may be usable from a browser, but this one runs entirely within a Chrome browser bar initial peer discovery. It runs as a reciprocal, almost pure P2P network in which each user takes on roles for other users as a relay or exit node. The server runs only to block bad peers and monitor their actions. Like the first program, it also functions as a web proxy within the browser.
It works as a web proxy where our server fetches the web page and serves it to you within a restricted iframe which intercepts all network requests, but with some key differences from other proxies:
Usually WebRTC uses server infrastructure to perform the handshake and connect clients like so:
However, the scrambler clients then use the connections given by the server as paths through which to exchange the data needed to establish additional channels.
These new channels are then used to signal further channels. The server is unaware of who is connected to whom and of the paths through which requests are routed.
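A minimal sketch of how an existing channel can carry that handshake, assuming an already-open RTCDataChannel and a simple JSON message shape (both are illustrative assumptions, not the exact protocol):

```typescript
// Sketch (offerer side only): an already-open RTCDataChannel acts as the
// signalling path for a brand-new peer connection, so no server is involved.
function connectViaPeer(signalChannel: RTCDataChannel): RTCPeerConnection {
  const pc = new RTCPeerConnection();

  // Forward our ICE candidates to the new peer over the existing channel.
  pc.onicecandidate = (ev) => {
    if (ev.candidate) {
      signalChannel.send(JSON.stringify({ type: "ice", candidate: ev.candidate }));
    }
  };

  // Apply the handshake messages arriving back over the existing channel.
  // (A real client would multiplex this with other traffic on the channel.)
  signalChannel.onmessage = async (ev) => {
    const msg = JSON.parse(ev.data);
    if (msg.type === "answer") {
      await pc.setRemoteDescription(msg.description);
    } else if (msg.type === "ice") {
      await pc.addIceCandidate(msg.candidate);
    }
  };

  // Create the offer and send it down the existing channel instead of
  // to a signalling server.
  pc.createDataChannel("data");
  pc.createOffer()
    .then((offer) => pc.setLocalDescription(offer))
    .then(() =>
      signalChannel.send(JSON.stringify({ type: "offer", description: pc.localDescription }))
    );

  return pc;
}
```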
This means that when you use the scrambler program, my server has no way of finding out who originated each request, since we cannot know which client is connected to which: the algorithms are stochastic. We could keep and publish logs if we had to; they would be useless. The problem is that many proxy or VPN providers can be hacked or forced to secretly record and hand over logs. If history has taught us anything, it is that anything can be leaked. It is better that the data is never even generated in the first place.
Issues hosts suffer from, such as being forced by law to secretly or openly maintain logs, being dishonest, or being hacked, are not a problem here, since the server itself is provably unable to determine the final destination of any request, and this is achieved without any violation of any law. We will therefore generate and publish our logs from the scrambler, since this will prove the program to be working properly.
When you connect you are given a set of connections, as represented in the diagram below, where you are the client and the connections are WebRTC connections.
The other nodes (represented as circles) are other users who are also connected clients. All connections beyond the initial set in the above graph are created by peers signalling new connections amongst themselves.
This network functions through a reciprocal arrangement where each user also takes roles in the paths of other users; every user can take any role in the path of any other user. Various additional WebRTC connections are then created between peers, so that a user's initial set of connections comes to look like this:
Requests are routed bidirectionally using the onion routing algorithm, with the symmetric algorithm AES-CBC used to encrypt and decrypt along the chain in the path (client, relay 1, relay 2 and pseudoExit), and with only the last node, called the pseudoExit node, seeing the request. The last node then fetches the request and sends the response back. The encryption keys are given out by the server, as per the initial diagram.
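As a minimal sketch of that layering, assuming WebCrypto AES-CBC and a simple IV-prefix framing (the framing and key handling here are illustrative assumptions, not the exact wire format):

```typescript
// Sketch of onion layering with WebCrypto AES-CBC: the request is encrypted
// once per hop. The pseudoExit's layer goes on first (innermost), relay 1's
// last (outermost), so each hop along the path peels exactly one layer and
// only the pseudoExit ever sees the plain request.
async function wrapOnion(request: Uint8Array, hopKeys: CryptoKey[]): Promise<Uint8Array> {
  let payload = request;
  // hopKeys is assumed to be in path order: [relay1, relay2, pseudoExit].
  for (const key of [...hopKeys].reverse()) {
    const iv = crypto.getRandomValues(new Uint8Array(16)); // fresh IV per layer
    const ciphertext = new Uint8Array(
      await crypto.subtle.encrypt({ name: "AES-CBC", iv }, key, payload)
    );
    // Prepend the IV so the hop holding this key can decrypt its layer.
    payload = new Uint8Array([...iv, ...ciphertext]);
  }
  return payload;
}

// Each hop removes one layer with its own key and forwards the result.
async function peelLayer(payload: Uint8Array, key: CryptoKey): Promise<Uint8Array> {
  const iv = payload.slice(0, 16);
  const inner = await crypto.subtle.decrypt({ name: "AES-CBC", iv }, key, payload.slice(16));
  return new Uint8Array(inner);
}
```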
The reason for the additional connections is so that the network can handle disconnects and reform a path along which to send the requests and responses.
The keys are shared with the nodes visually above and below each node, so that they can be used should the initial routes fail. This means that, at any time, three nodes other than the client hold a copy of each key.
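A minimal sketch of what that key sharing might look like, assuming a simple node abstraction (the `NodeRef` shape and message format are illustrative assumptions):

```typescript
// Sketch of spare-key replication: each hop's key is also handed to the
// nodes "above" and "below" it, so a spare can step into the path if the
// hop disconnects. Together with the hop itself, three nodes hold the key.
// (WebRTC data channels are already encrypted in transit via DTLS.)
interface NodeRef {
  id: string;
  send(msg: object): void;
}

function replicateHopKey(
  hop: NodeRef,
  above: NodeRef,
  below: NodeRef,
  rawKey: ArrayBuffer
): void {
  for (const spare of [above, below]) {
    spare.send({
      type: "spare-key",
      holder: hop.id, // whose layer this key decrypts
      key: Array.from(new Uint8Array(rawKey)),
    });
  }
}
```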
The reason that onion-routed P2P networks have not been run fully within the browser before is the short-lived nature of the connections, which is the key problem faced when the whole network runs in a browser. There are several key differences between this network and other onion networks, such as the transitory nature of the nodes and paths.
The clients also run a hiding algorithm which swaps the nodes initially given in the last position, so that the server is unaware of even the initial paths.
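A sketch of what such a swap could look like, assuming the client holds the path as an ordered list of peer ids (the names and the random-choice policy are illustrative assumptions):

```typescript
// Sketch of the hiding step: the client swaps the server-assigned final hop
// for one signalled by the peers themselves, so the server's view of the
// path is stale from the very start.
function hideLastHop(path: string[], peerSignalled: string[]): string[] {
  const swapped = [...path];
  // Pick a replacement final hop at random from peers discovered through
  // peer-to-peer signalling rather than from the server.
  const replacement = peerSignalled[Math.floor(Math.random() * peerSignalled.length)];
  swapped[swapped.length - 1] = replacement;
  return swapped;
}
```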
If we start with a set of nodes such as below:
Should node E disconnect, then D detects this and tells A to run a rerouting algorithm in which A becomes part of the main route and recruits a new spare, sharing its key, leaving a network looking like so:
The algorithm can be run any number of times. If node A now leaves the network, this algorithm will run again, resulting in a set of connections represented by the following graph:
If we rearrange this, we can see that the graph contains all of the connections of our original graph, although with one connection multiplexed.
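A sketch of the rerouting step described above, assuming the path is tracked as a route plus a map of spare key holders (all names here are illustrative assumptions):

```typescript
// Sketch of rerouting on disconnect: when a main-route node drops, the
// spare that already holds its key is promoted into the route, and a fresh
// spare is recruited so the key is once again held in three places.
interface PathState {
  route: string[]; // active hops, e.g. ["relay1", "relay2", "pseudoExit"]
  spares: Map<string, string>; // main hop id -> spare holding a copy of its key
}

function reroute(state: PathState, failed: string, newSpare: string): PathState {
  const spare = state.spares.get(failed);
  if (!spare) throw new Error(`no spare key holder for ${failed}`);
  // The spare already holds the failed hop's key, so it steps straight in.
  const route = state.route.map((hop) => (hop === failed ? spare : hop));
  // Recruit a replacement spare; the real client would now share the key to it.
  const spares = new Map(state.spares);
  spares.delete(failed);
  spares.set(spare, newSpare);
  return { route, spares };
}
```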
Governments occasionally bother those running TOR exit nodes by mistake. The quantity of traffic you are likely to serve is a microscopic fraction of that served by even the smallest TOR exit node or relay, consisting on average of about as many requests as you yourself made. I believe cases of governments accidentally bothering users of this network will be vanishingly rare.
For a wider understanding of WebRTC, just google it. It's a technology that allows browser tabs to connect and communicate with one another without the need for a server. To establish the connection, a third party must exchange some requisite initial data in a handshake. For the initial connections created by this application our server does this, but for further connections the existing connections themselves signal the new ones.
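For the initial, server-assisted case, a minimal sketch looks like this (the `sendToServer` and `onServerMessage` functions are placeholders for whatever transport carries the handshake):

```typescript
// Minimal sketch of the usual WebRTC handshake, with a server relaying the
// offer/answer. sendToServer/onServerMessage are placeholder assumptions.
declare function sendToServer(msg: object): void;
declare function onServerMessage(handler: (msg: any) => void): void;

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});
const channel = pc.createDataChannel("chat");

// Send our half of the handshake, plus ICE candidates, via the server.
pc.onicecandidate = (ev) =>
  ev.candidate && sendToServer({ type: "ice", candidate: ev.candidate });
pc.createOffer()
  .then((offer) => pc.setLocalDescription(offer))
  .then(() => sendToServer({ type: "offer", description: pc.localDescription }));

// Apply the other side's answer and candidates as the server relays them.
onServerMessage(async (msg) => {
  if (msg.type === "answer") await pc.setRemoteDescription(msg.description);
  else if (msg.type === "ice") await pc.addIceCandidate(msg.candidate);
});

channel.onopen = () => channel.send("hello, peer"); // direct P2P from here on
```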
Onion routing is a technique for sending messages in which a message is encrypted several times with different keys and passed through a chain of nodes, each of which removes a single layer of encryption, leaving the plain message only at the final node. It is widely used in anonymous online communication systems. A more comprehensive explanation can be found here: https://www.makeuseof.com/tag/what-is-onion-routing-exactly-makeuseof-explains/
It is a security rule enforced by browsers which prevents the fetching of certain types of resources from domains other than that of the website the browser is currently showing. It doesn't apply within browser add-ons, and isn't enforced when browsers run inside mobile applications. It can also be disabled by running a browser extension such as "cors toggle", or by starting the browser from the command line with a disable-web-security flag, but I decided it was easier to recommend a browser add-on.