Downtime this morning

vinylwasp
Posts: 95
Joined: Mon Oct 31, 2016 3:42 am
Location: Singapore

Re: Downtime this morning

Post by vinylwasp » Tue Jul 25, 2017 10:11 pm

Steve Sokolowski wrote:
vinylwasp wrote:That won't work either, as most users will have multiple miners behind a single NAT address per location, and it doesn't account for hosted miners or local stratum proxies. Limiting connections won't work either because of the variation in hash power per miner: you could have one account with 330 MH/s across 10 Innosilicon A2 Minis and another with 5040 MH/s across 10 Bitmain L3+s, each with 10 connections. The only way you could attempt this is to limit the actual submitted hashrate per registered user, and I suspect that's a lot of low-level rework for a very small gain that can easily be worked around by registering multiple accounts.

What's happening is in effect a flash crowd: a demand-driven denial-of-service event of the kind news outlets, gambling sites, and ticket sites often experience.
The solution to demand is always more resources, whether that's more hardware or more sites.
This only works when it's possible to parallelize the algorithms. The problem is that coin selection is inherently single-threaded: one person's assignment is directly influenced by the assignments of all other people.
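To illustrate the sequential dependency, here is a minimal sketch of a greedy coin-assignment loop. All names, numbers, and the greedy strategy itself are illustrative assumptions, not the pool's actual algorithm; the point is only that each miner's assignment depends on the capacity left over by every earlier assignment, so the loop cannot simply be split across workers.

```python
# Hypothetical sketch: why coin assignment is hard to parallelize.
# Each miner's assignment depends on the remaining capacity left by
# every earlier assignment, so the loop is inherently sequential.

def assign_coins(miners, coins):
    """Greedily assign each miner to the most profitable coin that
    still has spare capacity (all names/figures are illustrative)."""
    assignments = {}
    capacity = {c["name"]: c["capacity"] for c in coins}
    profit = {c["name"]: c["profit"] for c in coins}
    for miner, hashrate in miners:
        # The best remaining coin depends on ALL previous assignments.
        best = max((n for n in capacity if capacity[n] >= hashrate),
                   key=lambda n: profit[n], default=None)
        if best is not None:
            capacity[best] -= hashrate
            assignments[miner] = best
    return assignments

coins = [{"name": "LTC", "capacity": 500, "profit": 1.2},
         {"name": "DOGE", "capacity": 300, "profit": 1.5}]
miners = [("a", 300), ("b", 300), ("c", 100)]
print(assign_coins(miners, coins))  # → {'a': 'DOGE', 'b': 'LTC', 'c': 'LTC'}
```

Note that if the three miners were assigned in parallel from the same starting state, both "a" and "b" would pick DOGE and oversubscribe it, which is exactly the ordering dependency Steve describes.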

Our first step right now is to create additional processes to handle everything other than coin selection, so that the coin selection core can concentrate on that alone. There are five different "core components" we hope to separate. The first, which we are working on now, is share processing. Share processing is computationally intensive because the weight of each miner and the sell price of that coin need to be calculated. However, we believe that on August 6 we will be able to permanently eliminate share processing as an issue, because we will be able to run many processes doing it simultaneously. If share processing ever becomes a problem again, we'll be able to just buy more cores.

Once we solve that, we will sever block processing next, then coin assignment by algorithm, and finally stratum communication. In each case, we'll be able to support multiple "cores" of each type communicating through WAMP.
Great work, Steve. By parallelizing the algorithms you are in effect providing more resources through better utilization of the ones you have.