Server overload plans

News updates about the Prohashing pool
Forum rules
The News forum is only for updates about the Prohashing pool.

Replies to posts in this forum should be related to the news being announced. If you need support on another issue, please post in the forum related to that topic or seek one of the official support options listed in the top right corner of the forums page or on prohashing.com/about.

For the full list of PROHASHING forums rules, please visit https://prohashing.com/help/prohashing- ... rms-forums.
Steve Sokolowski
Posts: 4585
Joined: Wed Aug 27, 2014 3:27 pm
Location: State College, PA

Server overload plans

Post by Steve Sokolowski » Thu Jun 08, 2017 8:09 am

Unfortunately, we've encountered more performance issues, this time with the database server. We think the cause of the problem is that too many shares are being inserted. The insertions are overloading the RAID 5 array, which was never intended to hold share data but to which Chris had to move it because the fast disks had reached 99.5% capacity.
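
As context for why heavy insert traffic hits RAID 5 so hard, here is a back-of-the-envelope sketch: each small random write on RAID 5 costs roughly four disk operations (read old data, read old parity, write new data, write new parity), versus two mirrored writes on RAID 1+0. The raw IOPS figure below is illustrative, not the pool's actual hardware.

```python
# Illustrative only: why small random writes overwhelm RAID 5.
# RAID 5 pays a ~4x write penalty; RAID 1+0 pays ~2x.
def effective_write_iops(raw_iops: float, write_penalty: int) -> float:
    return raw_iops / write_penalty

print(effective_write_iops(10_000, 4))  # RAID 5:   2500 small writes/sec
print(effective_write_iops(10_000, 2))  # RAID 1+0: 5000 small writes/sec
```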

I think we can keep mining running while we address the issue, and at present I seem to have stabilized the system at a five-minute delay. Here's what I changed:
  • I raised the pool's minimum, maximum, and starting difficulties so that fewer shares are submitted
  • I removed columns for unnecessary data, like the IP address of the share submitter and the hash of the block
I don't see us being able to put a permanent solution in place, either in hardware or software, before Saturday. The software solution involves a significant change to reduce the number of shares recorded in the database. The hardware solution is to buy those 100,000 IOPS disks and convert the array to RAID 1+0, which we'd rather avoid because copying the data would mean a day of downtime. Another option is to VACUUM FULL the shares table and move it back to the fast disks, which would also mean downtime.
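
For the third option, a minimal sketch of what that maintenance could look like, assuming the database is PostgreSQL (the mention of VACUUM FULL suggests it); the table and tablespace names are hypothetical, not Prohashing's actual schema.

```python
# Hypothetical maintenance sketch, assuming PostgreSQL and psycopg2;
# "shares" and "fast_ssd" are illustrative names.
import psycopg2

conn = psycopg2.connect("dbname=pool")  # connection details are an assumption
conn.autocommit = True  # VACUUM cannot run inside a transaction block
with conn.cursor() as cur:
    # Rewrites the table and reclaims dead space; takes an ACCESS EXCLUSIVE
    # lock for the duration, which is why this option implies downtime.
    cur.execute("VACUUM FULL shares")
    # Physically copies the table onto the fast disks; also locks while copying.
    cur.execute("ALTER TABLE shares SET TABLESPACE fast_ssd")
conn.close()
```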

In the meantime, ignore the balance and hashrate charts except at the end of the day, when Chris will update them for the previous day's mining. We are also considering closing the site to new registrations to ease the overload. WAMP is spotty and unreliable because we're now getting an "exhausted open file descriptors" error on that server, and we don't know whether that is coincidental or related. As long as the mining server is online and receiving shares, money is being earned, even if the other parts of the system are running behind.
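
As a diagnostic aside, here is one generic way to see how close a process is to its descriptor limit on Linux; this is a sketch, not Prohashing's actual monitoring.

```python
# Generic Linux sketch for checking open-file-descriptor headroom.
import os
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
open_fds = len(os.listdir("/proc/self/fd"))  # descriptors held by this process
print(f"open: {open_fds}, soft limit: {soft}, hard limit: {hard}")
```
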
FRISKIE
Posts: 117
Joined: Sun Apr 16, 2017 12:51 pm

Re: Server overload plans

Post by FRISKIE » Thu Jun 08, 2017 8:32 am

I would support closing to new registrations until the capacity issues are resolved.

This is better for both Prohashing and the new customer, as currently their first experience is often a bad one, which can give the wrong impression that Prohashing is not a great pool.

Worse, if they leave and spread that first impression by posting comments anywhere, Prohashing's reputation is damaged.

I know you guys are working your tails off to make a great site and a profitable user experience, but you'll need to get the capacity issues under control, and closing the site to new entrants will help (massively) with that.
Steve Sokolowski
Posts: 4585
Joined: Wed Aug 27, 2014 3:27 pm
Location: State College, PA

Re: Server overload plans

Post by Steve Sokolowski » Thu Jun 08, 2017 8:43 am

I wanted to add that the gaps will be sporadic - the database inserter is likely to fall 20 minutes behind, crash, then restart and insert shares for another hour, leave a 20-minute gap, and so on.

A complicating factor is that NovaExchange and other exchanges are going offline because they, too, are having capacity issues.

Chris will calculate and correct the gaps at the end of the day.
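
Purely as illustration of what such a correction might involve - this is a guess, not necessarily Chris's actual method - a recorded gap could be credited from the miner's average share rate around it.

```python
# Speculative sketch: credit a recorded gap using the miner's average share
# rate around it. A guess at the approach, not Prohashing's actual method.
def estimate_missed_shares(gap_seconds: float, avg_shares_per_sec: float) -> float:
    return gap_seconds * avg_shares_per_sec

print(estimate_missed_shares(20 * 60, 3.5))  # a 20-minute gap at 3.5 shares/sec
```
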
Last edited by Steve Sokolowski on Thu Jun 08, 2017 8:50 am, edited 1 time in total.
dronKZ
Posts: 41
Joined: Fri Sep 09, 2016 3:45 am

Re: Server overload plans

Post by dronKZ » Thu Jun 08, 2017 8:48 am

Steve Sokolowski wrote: I wanted to add that the gaps will be sporadic - the database inserter is likely to fall 20 minutes behind, crash, then restart and insert shares for another hour, leave a 20-minute gap, and so on.

Chris will calculate and correct the gaps at the end of the day.

There used to be fewer failures; lately there's an outage every day!!! Either do something, or it's time to start publicizing your site as one that doesn't pay!!!
mjgraham
Posts: 16
Joined: Mon Oct 31, 2016 4:24 pm

Re: Server overload plans

Post by mjgraham » Thu Jun 08, 2017 9:06 am

I am sure you all are working hard to solve this, but I would like to ask: is the profitability data even valid anymore, even on the website? I see it updating, but the WAMP connection says it is behind and keeps climbing. I normally run two operations on here: one with just my miners, which I don't mind leaving to run on their own (I did increase the static diff just to help out, if it helps at all), and one where I run my NiceHash part, which I am going to stop until things get figured out.

I would say there is going to have to be some downtime no matter what, and I am OK with that as long as it fixes the issue. It seems everything pretty much depends on the share database and processing that data, which I am sure is massive. I don't know what the best plan for storage is, but definitely not RAID 5; I would plan on 2-3x more capacity than you need at the moment - RAID 1+0, SSDs, that kind of thing.

I have to agree: close site registrations for now, just so new users don't have a bad experience like FRISKIE said. I can't imagine you get hundreds a day, but maybe you do.

Normally I try to run with as low a diff as possible (not too low) just to maximize hash rate; if I am going to have a HW error, I would rather waste a 2048-diff share than a 16k one. But I can see how that is also 8x the shares per second on average, so I bumped the static diff up a bit and will see how it affects my mining. Plus, getting as many shares in per coin between changes helps.
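
As a quick sanity check of that 8x figure: at a fixed hashrate, the expected share rate is inversely proportional to the share difficulty.

```python
# At fixed hashrate, shares/sec scale as 1/difficulty, so dropping from
# diff 16384 to 2048 gives 16384 / 2048 = 8x the shares per second.
def rate_multiplier(old_diff: float, new_diff: float) -> float:
    return old_diff / new_diff

assert rate_multiplier(16384, 2048) == 8.0
```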

All in all, you guys are doing a good job. Yeah, it is frustrating for everyone, but I would like to say thanks for all the hard work.
Steve Sokolowski
Posts: 4585
Joined: Wed Aug 27, 2014 3:27 pm
Location: State College, PA

Re: Server overload plans

Post by Steve Sokolowski » Thu Jun 08, 2017 9:29 am

mjgraham wrote: I am sure you all are working hard to solve this, but I would like to ask: is the profitability data even valid anymore, even on the website? I see it updating, but the WAMP connection says it is behind and keeps climbing. I normally run two operations on here: one with just my miners, which I don't mind leaving to run on their own (I did increase the static diff just to help out, if it helps at all), and one where I run my NiceHash part, which I am going to stop until things get figured out.

I would say there is going to have to be some downtime no matter what, and I am OK with that as long as it fixes the issue. It seems everything pretty much depends on the share database and processing that data, which I am sure is massive. I don't know what the best plan for storage is, but definitely not RAID 5; I would plan on 2-3x more capacity than you need at the moment - RAID 1+0, SSDs, that kind of thing.

I have to agree: close site registrations for now, just so new users don't have a bad experience like FRISKIE said. I can't imagine you get hundreds a day, but maybe you do.

Normally I try to run with as low a diff as possible (not too low) just to maximize hash rate; if I am going to have a HW error, I would rather waste a 2048-diff share than a 16k one. But I can see how that is also 8x the shares per second on average, so I bumped the static diff up a bit and will see how it affects my mining. Plus, getting as many shares in per coin between changes helps.

All in all, you guys are doing a good job. Yeah, it is frustrating for everyone, but I would like to say thanks for all the hard work.
The profitability data is valid at the timestamp in the WAMP data. If the timestamp is recent, then it is more likely to represent current conditions than if the timestamp is old.

The only known bug in the system right now is simply that when the hashrate gets above 550 GH/s, the database server can't insert shares fast enough. That causes a cascade of downstream effects that look worse than what is actually happening. There aren't any problems with mining itself, but the user interface can make it appear as if the system is broken, because that's what I designed it to do - keep mining online at all costs and deprioritize the things that are less important.
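
A minimal sketch of that kind of prioritization, where the names and queue sizes are assumptions rather than the pool's actual code:

```python
# Sketch of "keep mining online at all costs": share submissions are never
# dropped, while statistics/UI updates are shed under load.
import queue

share_queue = queue.Queue()                # unbounded: shares must not be lost
stats_queue = queue.Queue(maxsize=10_000)  # bounded: the UI may lag instead

def record_share(share: dict) -> None:
    share_queue.put(share)                 # always accepted

def record_stat(event: dict) -> None:
    try:
        stats_queue.put_nowait(event)      # best effort only
    except queue.Full:
        pass                               # shed load: charts fall behind, mining doesn't
```
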
GregoryGHarding
Posts: 646
Joined: Sun Apr 16, 2017 3:01 pm

Re: Server overload plans

Post by GregoryGHarding » Thu Jun 08, 2017 10:11 am

As a temporary solution, I recommend closing the site to new registrations and limiting the hashrate to 500 GH/s, disconnecting the newest users until the total is under the 500 GH/s limit. It sounds harsh, but if other websites are going completely offline until they can fix their issues, a little regulation of hashrate isn't such a big issue in the long run while you do damage control and assess a fix.
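
In code terms, the proposal amounts to something like the following; the data shapes are hypothetical.

```python
# Sketch of the proposal: disconnect the newest miners until the pool's total
# hashrate falls under the cap. Data shapes are hypothetical.
def shed_newest(miners: list[tuple[float, float]], cap_ghs: float = 500.0):
    """miners: (joined_timestamp, hashrate_ghs) pairs."""
    miners = sorted(miners, key=lambda m: m[0])  # oldest first
    total = sum(h for _, h in miners)
    while total > cap_ghs and miners:
        _, h = miners.pop()                      # drop the newest connection
        total -= h
    return miners, total
```
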
olkah
Posts: 58
Joined: Fri Jan 27, 2017 9:36 pm

Re: Server overload plans

Post by olkah » Thu Jun 08, 2017 10:56 am

@dronKZ I agree!!!
FRISKIE
Posts: 117
Joined: Sun Apr 16, 2017 12:51 pm

Re: Server overload plans

Post by FRISKIE » Thu Jun 08, 2017 10:59 am

GregoryGHarding wrote: As a temporary solution, I recommend closing the site to new registrations and limiting the hashrate to 500 GH/s, disconnecting the newest users until the total is under the 500 GH/s limit. It sounds harsh, but if other websites are going completely offline until they can fix their issues, a little regulation of hashrate isn't such a big issue in the long run while you do damage control and assess a fix.
If that's what it takes to get capacity and performance issues under control long enough to implement a final fix, I would also be in favor.
tmopar
Posts: 60
Joined: Sun Apr 16, 2017 1:50 pm

Re: Server overload plans

Post by tmopar » Thu Jun 08, 2017 1:12 pm

You need more servers - that's the real answer. A master/slave arrangement for the SQL, where even-numbered clients hit server X and odd-numbered clients hit server Y, while the master, Z, is the one they all update and the one from which all important decisions and calculations are made, with the caveat that it will be slightly out of date.
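
A rough sketch of that even/odd routing; the hostnames are hypothetical.

```python
# Reads go to one of two replicas by client parity; all writes go to the
# master, which the replicas follow with slight lag. Hostnames hypothetical.
WRITE_MASTER = "server-z"

def pick_read_server(client_id: int) -> str:
    return "server-x" if client_id % 2 == 0 else "server-y"
```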

Another option would be to get rid of SQL as the primary storage method in favor of a more spartan approach. If you really just need hashes, there are easier ways to store them. You could use a filesystem model, which is robust, simple, and much faster, with less overhead.

Then you can periodically build the SQL database from a snapshot of the filesystem for your reporting purposes, so reporting is not a bottleneck on the main mining operation.
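
A minimal sketch of that flat-file approach; the format and fields are assumptions.

```python
# Append one share per line (cheap sequential writes, no index maintenance),
# then bulk-load snapshots into SQL for reporting later.
import json
import time

def append_share(path: str, share: dict) -> None:
    with open(path, "a") as f:
        f.write(json.dumps({"ts": time.time(), **share}) + "\n")

append_share("shares.log", {"worker": "rig1", "difficulty": 2048})
```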