It is important that you take a quick look at the overview and then the correct usage before using the site, or you will go wrong!
It works as a web proxy: our server fetches the web page and serves it to you within a restricted iframe which intercepts all network requests. However, there are some key differences from other proxies:
Usually, WebRTC uses server infrastructure to perform the handshake and connect clients, like so:
However, the scrambler clients then use the connections given by the server as paths over which to exchange the data needed to establish additional channels.
These new channels are then used to signal further channels. The server is unaware of who is connected to whom, and of the paths through which requests are routed.
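The relaying idea can be sketched as follows. This is a toy simulation only: plain objects stand in for WebRTC data channels, and the string payloads stand in for the SDP offer/answer pairs a real handshake would carry. The peer names and helper functions are illustrative, not the actual protocol.

```javascript
// Toy sketch: peer A already has channels to both B and C, so B and C
// can exchange handshake data through A and "connect" directly,
// without the server ever learning about the new channel.

function makePeer(id) {
  return { id, channels: {}, inbox: [] };
}

// The relay forwards the handshake payload without inspecting it.
function relaySignal(relay, from, to, payload) {
  to.inbox.push({ from: from.id, payload });
}

// Once both sides have exchanged payloads via the relay, they record
// a direct channel to each other.
function establishViaRelay(relay, b, c) {
  relaySignal(relay, b, c, "offer");   // stand-in for an SDP offer
  relaySignal(relay, c, b, "answer");  // stand-in for an SDP answer
  b.channels[c.id] = c;
  c.channels[b.id] = b;
}

const A = makePeer("A"), B = makePeer("B"), C = makePeer("C");
establishViaRelay(A, B, C);
console.log(Object.keys(B.channels)); // [ 'C' ] — a direct B<->C channel
```

The important property is the one described above: only the initial channels involve the server; everything signalled afterwards is invisible to it.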
This means that when you use the scrambler program my server has no way of finding out who originated each request as we cannot know which client is connected to which. We can keep and publish logs if we had to. They are useless. The problem is that many proxies or VPN providers can be hacked into or forced to secretly record and hand over logs. If history has taught us anything it is that anything can be leaked. It is better that the data is never even generated in the first place.
When you connect, you are given a set of connections, as represented in the diagram below, where you are the client and the connections are WebRTC connections.
The other nodes (represented as circles) are other users who are also connected clients. All connections beyond the initial set shown in the above graph are created by peers signalling new connections amongst themselves.
This network functions through a reciprocal arrangement: each user also takes roles in the paths of other users, and every user can take any role in the path of any other. Various additional WebRTC connections are then created between peers, so that the initial set of connections of a user looks like this:
Requests are routed bidirectionally using the onion routing algorithm, with the symmetric AES-CBC algorithm used to encrypt and decrypt along the path (client, relay 1, relay 2 and pseudoExit). Only the last node, called the pseudoExit node, sees the request; it then fetches the resource and sends the response back. The encryption keys are handed out by the server, as per the initial diagram.
The reason for the additional connections is so that the network can handle disconnects and re-form a path along which to send requests and responses.
The keys are shared with the nodes visually above and below them so that they can be used should the initial routes fail. This means that, at any time, three nodes other than the client hold a copy of each key.
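The replication rule can be sketched as below. The node layout and the `replicateKey` helper are hypothetical illustrations of the "above and below" sharing, not the actual implementation.

```javascript
// Sketch: each hop's key is held by the main-route node plus its two
// vertical neighbours (the spares above and below it in the diagram),
// giving three copies besides the client's own.
function makeNode(name) {
  return { name, keys: {} };
}

function replicateKey(position, key, mainNode, spareAbove, spareBelow) {
  const holders = [mainNode, spareAbove, spareBelow];
  for (const node of holders) node.keys[position] = key;
  return holders.map(n => n.name);
}

const e = makeNode("E"), a = makeNode("A"), f = makeNode("F");
const holders = replicateKey("relay1", "k-relay1", e, a, f);
console.log(holders); // [ 'E', 'A', 'F' ] — three key holders
```

Because a spare already holds the key for the slot it sits beside, it can step into the main route without any new key exchange.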
The reason onion-routed P2P networks have not previously been done fully within the browser is the short-lived nature of the connections: that is the key problem when the whole network runs inside browsers. There are several key differences between this network and other onion networks, such as the transitory nature of the nodes and paths.
The clients also run a hiding algorithm which swaps the initial nodes given into the last position, so that the server is unaware of even the initial paths.
If we start with a set of nodes such as below:
Should node E disconnect, D detects this and tells A to run a rerouting algorithm, in which A becomes part of the main route and recreates a new spare, sharing its key, leaving a network looking like so:
The algorithm can be run any number of times. If node A now leaves the network, the algorithm runs again, resulting in a set of connections represented by the following graph.
If we rearrange this, we can see that the graph contains all of the connections of our original graph, albeit with one connection multiplexed.
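The repeated rerouting step can be simulated as below. The node names follow the E-disconnects example above; the `recruit` callback (which supplies the replacement spare) is a stand-in for however the real network finds a new peer.

```javascript
// Toy simulation of rerouting: a main route of relays, each slot with
// a spare holding the same key. When a relay drops, its spare is
// promoted into the main route and a fresh spare is recruited, so the
// algorithm can run again and again as nodes churn.

function makeRoute(mainNames, spareNames) {
  return mainNames.map((name, i) => ({ name, spare: spareNames[i] }));
}

function handleDisconnect(route, deadName, recruit) {
  const i = route.findIndex(n => n.name === deadName);
  const promoted = route[i].spare;               // spare joins the main route
  route[i] = { name: promoted, spare: recruit() }; // new spare recruited
  return route;
}

// Main route B-E-F, with spares D, A, G (names illustrative).
let route = makeRoute(["B", "E", "F"], ["D", "A", "G"]);
handleDisconnect(route, "E", () => "H"); // E drops: A promoted, H recruited
console.log(route.map(n => n.name)); // [ 'B', 'A', 'F' ]
handleDisconnect(route, "A", () => "I"); // A drops next: H promoted in turn
console.log(route.map(n => n.name)); // [ 'B', 'H', 'F' ]
```

Because the promoted spare already holds the slot's key (as described earlier), the route keeps working without re-keying the whole path.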
This also gives you more legal cover should you use JOR, since the exits could be fetching anything (GET and POST requests). If you have made any dodgy request, this brings reasonable doubt into the equation ("it wasn't me", etc.). When your ISP looks at your records, it cannot distinguish your requests from another's! I cannot guarantee that nobody will ever figure out how to discern the different types of traffic, and I do not endorse any illegal actions.
Governments occasionally bother those running TOR exit nodes by mistake. However, the quantity of traffic you are likely to serve is a microscopic fraction of that served by even the smallest TOR exit node or relay (on average, about as many requests as you yourself make). I believe cases of governments accidentally bothering users of this network will be vanishingly rare.
For a wider understanding of WebRTC, just google it. It is a technology that allows browser tabs to connect and communicate with each other without the need for a server. To establish a connection, a third party must exchange some requisite data in a handshake. For the initial connections created by this application, our server does this; for further connections, the existing connections themselves signal the new ones.
Onion routing is a technique for sending messages in which a message is encrypted several times with different keys and passed through a chain, where each node decrypts a single layer of encryption, leaving the plain message only at the final node. It is widely used in anonymous online communication systems. A more comprehensive explanation can be found here: https://www.makeuseof.com/tag/what-is-onion-routing-exactly-makeuseof-explains/
It is a security rule enforced by browsers which prevents the fetching of certain types of resources from domains other than that of the current website. It does not apply within browser add-ons, and is not enforced when browsers run within mobile applications. It can also be disabled by a browser extension such as "CORS Toggle", or by starting the browser from the command line with a disable-web-security flag, but I decided it was easier to recommend a browser add-on.