Cobblestones


Profile Abel
Forum moderator
Project administrator
Project developer
Project tester
Avatar

Joined: Sep 15 06
Posts: 30
ID: 108
Credit: 167,489
RAC: 0
Message 4011 - Posted 6 Jun 2008 16:21:02 UTC

Obviously there are different ways to grant credit to you guys. It has long been debated how that should be done.

One method takes time into account: if you take more time on a workunit, you get more credit. This however gives an advantage to slow machines. They are doing less work yet getting more credit.

The other approach is FLOPS: you are granted credit based on the total FLOPS of an application. This would favor fast machines over slow ones, as they have a higher throughput.

There is also a hybrid of the two, basing it on the speed of your machine, the total FLOPS and the time taken.

Personally I think throughput is the bottom line, and credit should be assigned by the size of the workunit in FLOPS. But in the spirit of democracy we will elect a method to compute points.

Here is how it will go. In the first round every member is entitled to propose how the credits will be granted.

We will then take a vote on the one you guys like the best. We will then use that scheme in assigning credits.

Remember that the proposals should be for what to base the credit on and not how much credit to give.

For example:

Credits = total flops / flop rate

is ok.

But

Credits = 1 gazillion per work unit

is not ok.
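
(To make the acceptable kind of proposal concrete, here is a minimal sketch of a FLOPS-based scheme. It assumes the conventional BOINC cobblestone scale of 100 credits per day on a reference host sustaining 1 GFLOPS; the constant and function names are illustrative only, not an actual implementation.)

// A minimal sketch of FLOPS-based credit, assuming the conventional
// cobblestone scale: 100 credits per day of work on a reference host
// sustaining 1 GFLOPS. Illustrative only.
#include <cstdio>

const double FLOPS_PER_CREDIT = 8.64e11;  // (1e9 FLOPS * 86400 s) / 100 credits

// Credit depends only on the size of the workunit, not on which
// machine ran it or how long it took.
double credit_for_workunit(double total_flops) {
    return total_flops / FLOPS_PER_CREDIT;
}

int main() {
    double wu_flops = 2.0e13;  // hypothetical workunit size in FLOPs
    printf("credit granted: %.2f\n", credit_for_workunit(wu_flops));
    return 0;
}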

Hope this works out; have fun. Proposals will be open until Sunday night, and we will vote on Monday.

Rene
Volunteer tester
Avatar

Joined: Oct 2 06
Posts: 121
ID: 160
Credit: 109,415
RAC: 0
Message 4013 - Posted 6 Jun 2008 18:53:31 UTC

Well... I will be the first to kick off here.

My vote goes to FLOPS, because in my eyes this is the most "honest" approach. A quorum of 3 would be nice.
It provides credits based on the work done.

Static credit would be my second best... and isn't that bad while being in "alpha" stage. An extra plus would be to keep the credits/hour in pace with the other projects.

To be honest... let's not fall into the pitfall of credits based on benchmarks.
Or worse, a combination of benchmarks with a quorum of one.
(BOINC) History has had its flame wars based on various ways to "spoof" credits with such an approach. I would hate to see a fine project like Docking go to waste.

Just my 2 cents... ;-)


____________

Profile Saenger
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 125
ID: 79
Credit: 411,959
RAC: 0
Message 4021 - Posted 6 Jun 2008 23:07:29 UTC

I pretty much agree with Rene.

I'd prefer fixed or server-side calculated credits if possible. "Possible" means either all WUs have roughly the same size (for fixed credits) or a pre-known variable size (server-side calculation).
Flops should be the next best thing, not much behind.

Benches should only come with a quorum, which itself should only come if necessary for scientific purposes.
____________
Greetings from Saenger

For questions about Boinc look in the BOINC-Wiki

Profile suguruhirahara
Forum moderator
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 282
ID: 15
Credit: 56,614
RAC: 0
Message 4036 - Posted 7 Jun 2008 20:14:41 UTC

I support the same FLOPS-based approach that the project used before.

suguruhirahara
____________

I'm a volunteer participant; my views are not necessarily those of Docking@Home or its participating institutions.

Profile Conan
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 219
ID: 100
Credit: 4,256,493
RAC: 0
Message 4037 - Posted 9 Jun 2008 12:09:41 UTC
Last modified: 9 Jun 2008 12:11:36 UTC

Have been away a few days.
I am probably too late for this vote, and only three other people have voted.

The amount of FLOPS for a certain WU type will, I assume, be a fixed amount, no matter how long it takes to actually do the calculation.
Therefore FLOPS should give a fairly true measure of the effective work done on a WU type.
Of course, the faster the machine, the more work units can be done in a day, and so the actual amount of points/credit/cobblestones that computer can earn will be more than a much slower one.

Benchmarks are so flawed as to not have any meaning, so I vote benchmarking out.

Credit set at the server is OK, but you would need a reference machine to arrive at your baseline, and this can give machines similar to the reference machine an advantage over other CPU types and OS types; again, I vote this one out if it is based on a reference machine.
If it is done via calculation of what should be granted, then this type of set credit would be OK and I would vote for it.

But overall FLOPS seems to be a more reasonable way to get points for the amount of work done.

(Now the amount awarded for each FLOP, well, that will be another issue and another thread, and I bet a far more active and possibly heated one than this thread.)
____________

zombie67 [MM]
Volunteer tester
Avatar

Joined: Sep 18 06
Posts: 207
ID: 114
Credit: 2,817,648
RAC: 0
Message 4039 - Posted 9 Jun 2008 14:25:59 UTC - in response to Message ID 4021 .

I'd prefer fixed or server-side calculated credits if possible. "Possible" means either all WUs have roughly the same size (for fixed credits) or a pre-known variable size (server-side calculation).
Flops should be the next best thing, not much behind.

Benches should only come with a quorum, which itself should only come if necessary for scientific purposes.


+1

I also want to emphasize that increasing the quorum should be done only if the science requires it. Increasing the quorum purely for credit management is a horrible waste of resources. There's always a better way.
____________
Dublin, CA
Team SETI.USA
Profile Cori
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 161
ID: 90
Credit: 5,817
RAC: 0
Message 4041 - Posted 9 Jun 2008 16:01:23 UTC - in response to Message ID 4039 .

I'd prefer fixed or server-side calculated credits if possible. "Possible" means either all WUs have roughly the same size (for fixed credits) or a pre-known variable size (server-side calculation).
Flops should be the next best thing, not much behind.

Benches should only come with a quorum, which itself should only come if necessary for scientific purposes.


+1

I also want to emphasize that increasing the quorum should be done only if the science requires it. Increasing the quorum purely for credit management is a horrible waste of resources. There's always a better way.



I agree with you both! ;-)))
____________
Bribe me with Lasagna!! :-)
Profile adrianxw
Volunteer tester
Avatar

Joined: Dec 30 06
Posts: 164
ID: 343
Credit: 1,669,741
RAC: 0
Message 4042 - Posted 9 Jun 2008 16:11:37 UTC

I'd prefer fixed or server-side calculated credits if possible. "Possible" means either all WUs have roughly the same size (for fixed credits) or a pre-known variable size (server-side calculation).
Flops should be the next best thing, not much behind.

Benches should only come with a quorum, which itself should only come if necessary for scientific purposes.



+1

I also want to emphasize that increasing the quorum should be done only if the science requires it. Increasing the quorum purely for credit management is a horrible waste of resources. There's always a better way.



I also agree with both.

____________
Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.
Memo
Forum moderator
Project developer
Project tester

Joined: Sep 13 06
Posts: 88
ID: 14
Credit: 1,666,392
RAC: 0
Message 4045 - Posted 10 Jun 2008 5:40:15 UTC

If it is not too late here are my 2 cents...

If WUs are about the same size, I think it is better to have a static credit per WU. This gives a little more work to the admins, but I think in the end both volunteers and admins will be happier. Plus, from my past experience with charmm being so crazy at times, I remember Andre was able to control credit better this way, that is, to give credit similar to other projects.

Profile Arun
Volunteer tester

Joined: Apr 30 08
Posts: 40
ID: 379
Credit: 10,385
RAC: 0
Message 4046 - Posted 10 Jun 2008 22:53:18 UTC

Since the WUs will have variable length, credits based on FLOPS would be my choice.

Profile David Ball
Forum moderator
Volunteer tester
Avatar

Joined: Sep 18 06
Posts: 274
ID: 115
Credit: 1,634,401
RAC: 0
Message 4061 - Posted 14 Jun 2008 16:29:19 UTC - in response to Message ID 4046 .

Since the WUs will have variable length, credits based on FLOPS would be my choice.


I'd agree with FLOPS, but when assigning the credit per FLOP I'd keep in mind the other resources being used. IIRC, charmm takes a lot more memory than some other projects and was very disk or OS call intensive. I'm not sure Andre ever found out what was going on with the heavy disk/OS activity. On Linux, it showed up as large amounts of CPU time spent in "system" space. ISTR that many quad core machines (often Macs) experienced vastly increased time per WU as more cores were working on Docking, until it basically paralyzed the machine. That was being worked on when the project shut down for the move. It may even have been a bug that was subsequently fixed in the BOINC client. There seemed to be a massive number of calls being made to the OS to request the time. I don't recall if that was ever linked to re-reading the script that runs charmm and possibly the updating of the last-access time for the script file, or if it turned out to be something else entirely.

____________
The views expressed are my own.
Facts are subject to memory error :-)
Have you read a good science fiction novel lately?
STE\/E [BlackOpsTeam]
Volunteer tester

Joined: Nov 14 06
Posts: 47
ID: 292
Credit: 10,082,802
RAC: 0
Message 4065 - Posted 14 Jun 2008 20:25:11 UTC
Last modified: 14 Jun 2008 20:29:32 UTC

Judging from these WUs of mine, I take it that the project has already caved in to some sort of credit reduction: http://docking.cis.udel.edu/result.php?resultid=1770 = 50 credits per hour reported in Feb 2008, versus this one reported in Mar 2008 > http://docking.cis.udel.edu/result.php?resultid=7007 = 20 credits per hour ... ???

Profile Andre Kerstens
Forum moderator
Project tester
Volunteer tester
Avatar

Joined: Sep 11 06
Posts: 749
ID: 1
Credit: 15,199
RAC: 0
Message 4105 - Posted 25 Jun 2008 2:34:27 UTC - in response to Message ID 4061 .

Good point. No, I've not been able to figure out why these system calls to get the time of day on linux are made a gazillion times per run. I do think that this might cause the massive difference in runtime between linux and windows. The runtime difference issue is already on the project's to-do list, so I'll make sure that whatever notes I have on this will be passed on to the next person trying to crack this issue.

Cheers
Andre

Since the WUs will have variable length, credits based on FLOPS would be my choice.


I'd agree with FLOPS, but when assigning the credit per FLOP I'd keep in mind the other resources being used. IIRC, charmm takes a lot more memory than some other projects and was very disk or OS call intensive. I'm not sure Andre ever found out what was going on with the heavy disk/OS activity. On Linux, it showed up as large amounts of CPU time spent in "system" space. ISTR that many quad core machines (often Macs) experienced vastly increased time per WU as more cores were working on Docking, until it basically paralyzed the machine. That was being worked on when the project shut down for the move. It may even have been a bug that was subsequently fixed in the BOINC client. There seemed to be a massive number of calls being made to the OS to request the time. I don't recall if that was ever linked to re-reading the script that runs charmm and possibly the updating of the last-access time for the script file, or if it turned out to be something else entirely.


____________
D@H the greatest project in the world... a while from now!
Profile Arun
Volunteer tester

Joined: Apr 30 08
Posts: 40
ID: 379
Credit: 10,385
RAC: 0
Message 4113 - Posted 25 Jun 2008 22:53:59 UTC - in response to Message ID 4105 .

Good point. No, I've not been able to figure out why these system calls to get the time of day on linux are made a gazillion times per run. I do think that this might cause the massive difference in runtime between linux and windows. The runtime difference issue is already on the project's to-do list, so I'll make sure that whatever notes I have on this will be passed on to the next person trying to crack this issue.

Cheers
Andre



The problem of execution time differing between Windows and Linux needs to be solved before we move on to fixed credit based on FLOPS. I will be working on this issue tomorrow. Andre, can you pass me the notes you have on this issue?

cheers,
Arun
Profile Andre Kerstens
Forum moderator
Project tester
Volunteer tester
Avatar

Joined: Sep 11 06
Posts: 749
ID: 1
Credit: 15,199
RAC: 0
Message 4116 - Posted 26 Jun 2008 0:28:00 UTC - in response to Message ID 4113 .

I've emailed you the notes I could find. Hope they will be a little bit useful.

Andre


The problem of execution time differing between Windows and Linux needs to be solved before we move on to fixed credit based on FLOPS. I will be working on this issue tomorrow. Andre, can you pass me the notes you have on this issue?

cheers,
Arun


____________
D@H the greatest project in the world... a while from now!
Profile David Ball
Forum moderator
Volunteer tester
Avatar

Joined: Sep 18 06
Posts: 274
ID: 115
Credit: 1,634,401
RAC: 0
Message 4123 - Posted 26 Jun 2008 16:39:36 UTC - in response to Message ID 4105 .

Andre wrote:

Good point. No, I've not been able to figure out why these system calls to get the time of day on linux are made a gazillion times per run. I do think that this might cause the massive difference in runtime between linux and windows. The runtime difference issue is already on the project's to-do list, so I'll make sure that whatever notes I have on this will be passed on to the next person trying to crack this issue.


Do you know for sure if the problem still exists? Unfortunately, I don't remember the details but while docking was shut down there was a fix mentioned on the BOINC developers mailing list (might have been the forums) that sounded to me like it might have been causing a similar problem. It's been a long time so I don't recall if it was in the BOINC client or in the application framework that was distributed. Since it didn't affect most applications, it must have been in a support function or something. A polling loop with no delay in it that was calling the OS time of day function to check elapsed time was what it sounded like. Might have had something to do with a heartbeat function. I'll see if I can find it.
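
(Purely as illustration of the suspected pattern, not code from BOINC or charmm: a busy-wait that polls the OS clock every iteration, versus one that sleeps between checks.)

#include <sys/time.h>
#include <unistd.h>

static double now_sec() {
    timeval tv;
    gettimeofday(&tv, 0);        // the time-of-day call that floods strace
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

// Suspected anti-pattern: a polling loop with no delay in it.
void wait_busy(double seconds) {
    double start = now_sec();
    while (now_sec() - start < seconds) { }  // millions of clock calls
}

// The usual fix: nap between checks, cutting the calls by orders of magnitude.
void wait_polite(double seconds) {
    double start = now_sec();
    while (now_sec() - start < seconds) {
        usleep(100000);          // 100 ms
    }
}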

____________
The views expressed are my own.
Facts are subject to memory error :-)
Have you read a good science fiction novel lately?
Profile Andre Kerstens
Forum moderator
Project tester
Volunteer tester
Avatar

Joined: Sep 11 06
Posts: 749
ID: 1
Credit: 15,199
RAC: 0
Message 4133 - Posted 27 Jun 2008 1:00:25 UTC - in response to Message ID 4123 .

Hmmm, that sounds interesting. Yes, please let the project team know if you find something on that. In the meantime, Arun could try commenting out the boinc calls in the charmm code and see if the time calls are still being made. If not, then that points in the direction you are thinking.

Andre


Do you know for sure if the problem still exists? Unfortunately, I don't remember the details but while docking was shut down there was a fix mentioned on the BOINC developers mailing list (might have been the forums) that sounded to me like it might have been causing a similar problem. It's been a long time so I don't recall if it was in the BOINC client or in the application framework that was distributed. Since it didn't affect most applications, it must have been in a support function or something. A polling loop with no delay in it that was calling the OS time of day function to check elapsed time was what it sounded like. Might have had something to do with a heartbeat function. I'll see if I can find it.


____________
D@H the greatest project in the world... a while from now!
Profile Arun
Volunteer tester

Joined: Apr 30 08
Posts: 40
ID: 379
Credit: 10,385
RAC: 0
Message 4135 - Posted 27 Jun 2008 15:55:57 UTC - in response to Message ID 4133 .

Hmmm, that sounds interesting. Yes, please let the project team know if you find something on that. In the meantime, Arun could try commenting out the boinc calls in the charmm code and see if the time calls are still being made. If not, then that points in the direction you are thinking.

Andre


Do you know for sure if the problem still exists? Unfortunately, I don't remember the details but while docking was shut down there was a fix mentioned on the BOINC developers mailing list (might have been the forums) that sounded to me like it might have been causing a similar problem. It's been a long time so I don't recall if it was in the BOINC client or in the application framework that was distributed. Since it didn't affect most applications, it must have been in a support function or something. A polling loop with no delay in it that was calling the OS time of day function to check elapsed time was what it sounded like. Might have had something to do with a heartbeat function. I'll see if I can find it.



Andre and David,
Thanks for the informative discussion. I used the gprof profiling tool and found that the times() function accounted for 7.02% of the run time, taking 5.12 seconds out of the total 72.98 seconds for this charmm execution. times() was the 3rd most time-consuming function after the enbfs8 and ephifs Fortran calls. The output of strace also showed that the times() function is called many times. Any suggestions?
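
(If the calls really do come from an elapsed-time check inside a hot loop, one common mitigation is to batch the clock queries; a hypothetical sketch, with invented names and batching factor:)

#include <sys/times.h>

// Wraps the times() call that gprof flagged.
clock_t elapsed_ticks(clock_t start) {
    tms t;
    return times(&t) - start;
}

void inner_loop(long iterations, clock_t start) {
    const long CHECK_EVERY = 10000;        // query the clock once per batch
    for (long i = 0; i < iterations; ++i) {
        // ... numerical work (enbfs8/ephifs-style kernels) ...
        if (i % CHECK_EVERY == 0) {
            clock_t dt = elapsed_ticks(start);
            (void)dt;                      // e.g. heartbeat/checkpoint logic
        }
    }
}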

David, any information you can find will be useful.

cheers
Arun
Profile David Ball
Forum moderator
Volunteer tester
Avatar

Joined: Sep 18 06
Posts: 274
ID: 115
Credit: 1,634,401
RAC: 0
Message 4137 - Posted 27 Jun 2008 18:41:47 UTC

So far, I haven't been able to find where I read it. IIRC, it was just a few months after Docking shut down at UTEP. I thought it was fixed at the time but I'm not sure on that and I can't find it in the archives or BOINC forums. You might have to ask on the BOINC DEV mailing list.

____________
The views expressed are my own.
Facts are subject to memory error :-)
Have you read a good science fiction novel lately?

Profile Conan
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 219
ID: 100
Credit: 4,256,493
RAC: 0
Message 4244 - Posted 10 Aug 2008 22:13:00 UTC

The amount of credit granted on this project (particularly for my Linux-running Opterons at 8-10 cr/h) is not all that high, very low actually, as it is still based on the benchmark system.

Just wondering: when can we expect the change to a fixed system of FLOPS counting, or whatever other system you are going with?

For the amount of hours/costs I am putting in, I am not getting that much of a return.

Other than this problem, at the moment all is running well: xml data is exporting again, no more erroring work units, so it's clear flying ahead (at least at the moment).

Conan

Keep smiling, as it makes others wonder what you have been up to.
____________

Profile Michela
Forum moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar

Joined: Sep 13 06
Posts: 163
ID: 10
Credit: 97,083
RAC: 0
Message 4247 - Posted 12 Aug 2008 14:10:06 UTC - in response to Message ID 4244 .

The amount of credit granted on this project (particularly for my Linux-running Opterons at 8-10 cr/h) is not all that high, very low actually, as it is still based on the benchmark system.

Just wondering: when can we expect the change to a fixed system of FLOPS counting, or whatever other system you are going with?

For the amount of hours/costs I am putting in, I am not getting that much of a return.

Other than this problem, at the moment all is running well: xml data is exporting again, no more erroring work units, so it's clear flying ahead (at least at the moment).

Conan

Keep smiling, as it makes others wonder what you have been up to.


I just created a thread to update you all on the next steps to move (finally) to beta. The credits issue has changed since we discussed it in one of our threads. Unfortunately each work-unit does not take a deterministic amount of time. This discourages us from using a fixed amount of credits. We have made major changes to the code of charmm, and now we use the same charmm source for Windows and Linux with the same compiler optimizations. This should prevent the significant differences between the Windows and Linux versions that were observed in the past.

Sure, we can increase the amount of credit per FLOP. Also, we want to identify those volunteers who give us the best results. We are working on a web page that ranks the top results and their volunteers.

Our goal is to have D@H in beta on September 1. We are moving forward!

Michela





____________
If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'!
Profile Conan
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 219
ID: 100
Credit: 4,256,493
RAC: 0
Message 4259 - Posted 13 Aug 2008 11:43:37 UTC - in response to Message ID 4247 .

The amount of credit granted on this project (particularly for my Linux-running Opterons at 8-10 cr/h) is not all that high, very low actually, as it is still based on the benchmark system.

Just wondering: when can we expect the change to a fixed system of FLOPS counting, or whatever other system you are going with?

For the amount of hours/costs I am putting in, I am not getting that much of a return.

Other than this problem, at the moment all is running well: xml data is exporting again, no more erroring work units, so it's clear flying ahead (at least at the moment).

Conan

Keep smiling, as it makes others wonder what you have been up to.


I just created a thread to update you all on the next steps to move (finally) to beta. The credits issue has changed since we discussed it in one of our threads. Unfortunately each work-unit does not take a deterministic amount of time. This discourages us from using a fixed amount of credits. We have made major changes to the code of charmm, and now we use the same charmm source for Windows and Linux with the same compiler optimizations. This should prevent the significant differences between the Windows and Linux versions that were observed in the past.

Sure, we can increase the amount of credit per FLOP. Also, we want to identify those volunteers who give us the best results. We are working on a web page that ranks the top results and their volunteers.

Our goal is to have D@H in beta on September 1. We are moving forward!

Michela






G'Day Michela,
Great to hear that the project is moving forward at a much quicker rate now.
Things are starting to run more smoothly, which will help.
Your team's rapid responses on the forum are a big plus and much appreciated.
Thanks for the info on what's happening, and thanks also for the credit note.

With regard to the new keys, I will finish what I currently have, then detach and reattach each Linux machine.
The Windows machine is going well now after detaching and reattaching twice.
____________
Profile adrianxw
Volunteer tester
Avatar

Joined: Dec 30 06
Posts: 164
ID: 343
Credit: 1,669,741
RAC: 0
Message 4264 - Posted 13 Aug 2008 18:25:52 UTC
Last modified: 13 Aug 2008 18:26:26 UTC

For comparison, on this machine I am averaging 13.8 per CPU hour here, 15.3 per CPU hour at SZTAKI, 19.4 per CPU hour at Rosetta and 21.1 per CPU hour at Einstein and 33.8 per CPU hour at QMC. (Q6600 @ 2.4GHz, Win XP).
____________
Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.

Profile Cori
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 161
ID: 90
Credit: 5,817
RAC: 0
Message 4274 - Posted 15 Aug 2008 15:34:16 UTC - in response to Message ID 4264 .
Last modified: 15 Aug 2008 15:37:04 UTC

For comparison, on this machine I am averaging 13.8 per CPU hour here, 15.3 per CPU hour at SZTAKI, 19.4 per CPU hour at Rosetta and 21.1 per CPU hour at Einstein and 33.8 per CPU hour at QMC. (Q6600 @ 2.4GHz, Win XP).

I have found quite similar results for my dual core lappy (T7700@2.4 Ghz) under Win x64:

  • Docking@home 14.63 cr/h
  • World Community Grid 16.36 cr/h
  • ABC@home 19.53 cr/h
  • Simap 21.91 cr/h
  • Einstein@Home 24.26 cr/h
  • QMC 29.22 cr/h
  • PrimeGrid 34.24 cr/h
  • MilkyWay@Home 35.88 cr/h


----> more details and source stats from here .
____________
Bribe me with Lasagna!! :-)

Profile Michela
Forum moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar

Joined: Sep 13 06
Posts: 163
ID: 10
Credit: 97,083
RAC: 0
Message 4275 - Posted 15 Aug 2008 15:55:41 UTC - in response to Message ID 4274 .

For comparison, on this machine I am averaging 13.8 per CPU hour here, 15.3 per CPU hour at SZTAKI, 19.4 per CPU hour at Rosetta and 21.1 per CPU hour at Einstein and 33.8 per CPU hour at QMC. (Q6600 @ 2.4GHz, Win XP).

I have found quite similar results for my dual core lappy (T7700@2.4 Ghz) under Win x64:

  • Docking@home 14.63 cr/h
  • World Community Grid 16.36 cr/h
  • ABC@home 19.53 cr/h
  • Simap 21.91 cr/h
  • Einstein@Home 24.26 cr/h
  • QMC 29.22 cr/h
  • PrimeGrid 34.24 cr/h
  • MilkyWay@Home 35.88 cr/h


----> more details and source stats from here .



We definitely need to give you all more credits!!!

I will look at this today.

Michela


____________
If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'!
Profile Saenger
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 125
ID: 79
Credit: 411,959
RAC: 0
Message 4276 - Posted 15 Aug 2008 17:35:54 UTC

My 'puter has these C/h rates for various projects (descending order):
Project : C/h granted
Cosmology : 105,82
Milkyway : 97,44
PrimeGrid : 61,81
Einstein : 60,02
Riesel : 52,46
QMC : 42,81
CPDN : 42,99
GPUgrid : 39,71
yoyo : 37,21
Simap : 35,26
Tanpaku : 34,28
Magnetism : 30,80
Poem : 28,27
BURP : 26,12
Lattice : 26,31
UTC malaria : 26,45
Malaria : 26,19
Rosetta : 25,48
BCL : 25,51
Leiden : 25,44
ViP : 25,20
RALPH : 25,18
IberCivis : 25,00
WCG : 23,07
Docking : 21,15
LHC : 20,62
BOINCalpha : 20,25
GenLife : 18,94
Orbit : 18,09
Superlink : 10,11
Pirates : 9,22


____________
Greetings from Saenger

For questions about Boinc look in the BOINC-Wiki

Profile Cori
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 161
ID: 90
Credit: 5,817
RAC: 0
Message 4280 - Posted 18 Aug 2008 17:29:40 UTC - in response to Message ID 4275 .

For comparison, on this machine I am averaging 13.8 per CPU hour here, 15.3 per CPU hour at SZTAKI, 19.4 per CPU hour at Rosetta and 21.1 per CPU hour at Einstein and 33.8 per CPU hour at QMC. (Q6600 @ 2.4GHz, Win XP).

I have found quite similar results for my dual core lappy (T7700@2.4 Ghz) under Win x64:

  • Docking@home 14.63 cr/h
  • World Community Grid 16.36 cr/h
  • ABC@home 19.53 cr/h
  • Simap 21.91 cr/h
  • Einstein@Home 24.26 cr/h
  • QMC 29.22 cr/h
  • PrimeGrid 34.24 cr/h
  • MilkyWay@Home 35.88 cr/h


----> more details and source stats from here .



We definitely need to give you all more credits!!!

I will look at this today.

Michela


Hey, that sounds good! Any news yet? *grin*
____________
Bribe me with Lasagna!! :-)
Profile DoctorNow
Volunteer tester
Avatar

Joined: Nov 13 06
Posts: 7
ID: 217
Credit: 92,503
RAC: 0
Message 4281 - Posted 18 Aug 2008 20:29:56 UTC - in response to Message ID 4275 .
Last modified: 18 Aug 2008 20:31:12 UTC

We definitely need to give you all more credits!!!

Yes, agreed.
The current ones are benchmark-dependent because of the quorum of 1; that's a nice and easy way for cheaters to use optimized clients.
Think about some fixed credits, maybe.

My puter has this C/h rates for various projects (decending order):
Project : C/h granted
Cosmology : 105,82

Obviously you haven't crunched Cosmo for a while, Saenger.
They have fallen to a level below SETI! ;)
____________
Life is Science, and Science rules. To the universe and beyond
Proud member of BOINC@Heidelberg
Profile Saenger
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 125
ID: 79
Credit: 411,959
RAC: 0
Message 4282 - Posted 18 Aug 2008 21:26:11 UTC - in response to Message ID 4281 .
Last modified: 18 Aug 2008 21:28:14 UTC

We definitely need to give you all more credits!!!

Yes, agreed.
The current ones are benchmark-dependent because of the quorum of 1; that's a nice and easy way for cheaters to use optimized clients.
Think about some fixed credits, maybe.

I claim considerably less than benchmarks would demand. It's quorum=1, but no benches.
My puter has this C/h rates for various projects (decending order):
Project : C/h granted
Cosmology : 105,82

Obviously you haven't crunched Cosmo for a while, Saenger.
They have fallen to a level below SETI! ;)


Yes, I saw that one too late; it's a long-term sample. Currently they are at about 25.5 C/h, which is about what I claim.
____________
Greetings from Saenger

For questions about Boinc look in the BOINC-Wiki
Profile DoctorNow
Volunteer tester
Avatar

Joined: Nov 13 06
Posts: 7
ID: 217
Credit: 92,503
RAC: 0
Message 4283 - Posted 18 Aug 2008 22:28:59 UTC - in response to Message ID 4282 .

I claim considerably less than benchmarks would demand. It's quorum=1, but no benches.

Hm, then why are your results always getting what they claim?
It's the same for me - claim = grant, so it must be benchmark-dependent. ;-)
And I note down my WUs in an Excel worksheet.
The average per hour for the last WUs was always the same. With fixed credits it would have varied. ;)
____________
Life is Science, and Science rules. To the universe and beyond
Proud member of BOINC@Heidelberg
Profile Michela
Forum moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar

Joined: Sep 13 06
Posts: 163
ID: 10
Credit: 97,083
RAC: 0
Message 4284 - Posted 19 Aug 2008 2:22:33 UTC - in response to Message ID 4283 .

I claim considerably less than benchmarks would demand. It's quorum=1, but no benches.

Hm, then why are your results always getting what they claim?
It's the same for me - claim = grant, so it must be benchmark-dependent. ;-)
And I note down my WUs in an Excel worksheet.
The average per hour for the last WUs was always the same. With fixed credits it would have varied. ;)


Hi, claim = grant because we no longer replicate. We are testing a post-processing algorithm for the results (clustering results based on deviations and energies). Trilce is out of town this week, but when she is back next week she will tell us more about how the algorithm works. The positive thing is that we are collecting a lot of scientific data.

Michela


____________
If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'!
Profile Saenger
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 125
ID: 79
Credit: 411,959
RAC: 0
Message 4291 - Posted 22 Aug 2008 10:01:06 UTC

I've got claim=grant as well on Einstein and CPDN; like Docking, neither uses benches. Only they both give me considerably more than I would claim with benches.
____________
Greetings from Saenger

For questions about Boinc look in the BOINC-Wiki

Profile Conan
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 219
ID: 100
Credit: 4,256,493
RAC: 0
Message 4458 - Posted 3 Oct 2008 12:35:46 UTC - in response to Message ID 4275 .

For comparison, on this machine I am averaging 13.8 per CPU hour here, 15.3 per CPU hour at SZTAKI, 19.4 per CPU hour at Rosetta and 21.1 per CPU hour at Einstein and 33.8 per CPU hour at QMC. (Q6600 @ 2.4GHz, Win XP).

I have found quite similar results for my dual core lappy (T7700@2.4 Ghz) under Win x64:

  • Docking@home 14.63 cr/h
  • World Community Grid 16.36 cr/h
  • ABC@home 19.53 cr/h
  • Simap 21.91 cr/h
  • Einstein@Home 24.26 cr/h
  • QMC 29.22 cr/h
  • PrimeGrid 34.24 cr/h
  • MilkyWay@Home 35.88 cr/h


----> more details and source stats from here .



We definitely need to give you all more credits!!!

I will look at this today.

Michela



It has been 42 days since the last post, so I was wondering: has there been any progress on this, Michela?
The granted credit is still very low, and it is even lower on computers that benchmark poorly (as with my Linux machines compared to my Windows machines, even with the same hardware).

Thanks and keep up the good work.
____________
Astro
Avatar

Joined: Sep 1 08
Posts: 5
ID: 405
Credit: 201,664
RAC: 0
Message 4461 - Posted 4 Oct 2008 19:44:36 UTC
Last modified: 4 Oct 2008 20:04:23 UTC

Being curious, and since I have several "dual boot" machines, I took a look at benchmarks for the same machine on Windows vs. Linux. (Note: all Linux BOINC versions are 64-bit; the Windows benchmarks are 64-bit except where noted; all machines are AMD processors.)

AMD 9950 BE Quad
Win 2231/7219 SUM=9450
Lin 2129/5252 SUM=7381

AMD 9950 BE Quad
Win 2235/7203 SUM=9438
Lin 2086/6043 SUM=8129

AMD64 X2 6000 Dual
Win 2879/7481 SUM=10360
Lin 2891/7465 SUM=10356

AMD64 X2 5200 Dual
Win 2567/6747 SUM=9314
Lin 2632/6345 SUM=8977

AMD64 X2 4800 Dual
Win 2586/4662 SUM=7248 (32bit windows)
Lin 2371/5998 SUM=8369

AMD64 3700 Single
Win 2316/4209 SUM=6525 (32bit windows)
Lin 2129/5252 SUM=7381

Since the formula for "claimed credit" is ((Whetstone + Dhrystone) x CPU seconds)/1728000, I also included the sum of Whetstone and Dhrystone. That means the only part left is "CPU seconds", which I'll now collect average data on to see if Windows has an advantage over Linux.

Be back soon.

(note: NO intel data available
and all data taken from my publicly available list of hosts)
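
(The formula is simple enough to check directly. Here it is as code, fed with the first quad's benchmark sums from the list above; it reproduces the 19.69 and 15.38 cr/h figures in the next post.)

#include <cstdio>

// Claimed credit from the host's Whetstone + Dhrystone benchmarks
// (in MIPS/MFLOPS) and the CPU seconds spent on the result.
double claimed_credit(double whetstone, double dhrystone, double cpu_seconds) {
    return (whetstone + dhrystone) * cpu_seconds / 1728000.0;
}

int main() {
    // First AMD 9950 quad above, for one hour of CPU time:
    printf("Win: %.2f cr/h\n", claimed_credit(2231, 7219, 3600));  // ~19.69
    printf("Lin: %.2f cr/h\n", claimed_credit(2129, 5252, 3600));  // ~15.38
    return 0;
}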

Astro
Avatar

Joined: Sep 1 08
Posts: 5
ID: 405
Credit: 201,664
RAC: 0
Message 4462 - Posted 4 Oct 2008 21:31:55 UTC
Last modified: 4 Oct 2008 22:17:34 UTC

OK, I've collected all the WUs that exist in the database for those machine/OS combinations. I found several very short WUs (<1400 sec) and deleted them from all records. Here are the average CPU seconds per OS, the sample size, and the "claimed credit per hour" based upon the benchmarks in the previous post, using ((Whetstone + Dhrystone) x 3600)/1728000.

AMD9950one BE Quad
Lin, 3609 seconds/ (230 samples)/ 15.38 credit/hour based upon benchmark
Win, 3673 seconds/ (12 samples)/ 19.69

AMD9950two BE Quad
Lin, 3554 (236) 15.38
Win, 3584 (9) 19.66

AMD64 X2 6000 dual
Lin, 3292 (124) 21.58
Win, 3543 (11) 21.58

AMD64 X2 5200 dual
Lin, 3580 (108) 18.70
Win, 3622 (6) 19.40

AMD64 X2 4800 dual
Lin, 4029 (98) 17.43
Win, 4348 (5) 15.1 (32bit windows)

AMD64 3700 single
Lin, 4275 (18) 15.38
Win, 4185 (39) 13.59 (32 bit windows)

To be honest, I'm not seeing a big difference either way. Looks like I should run Linux on some, Windows on some. Linux generally is faster per WU but claims less per hour. Draw your own conclusions.

Dr Dan T. Morris
Avatar

Joined: Sep 3 08
Posts: 19
ID: 561
Credit: 1,563,073
RAC: 0
Message 4463 - Posted 5 Oct 2008 2:18:05 UTC

I vote 33.8 per CPU hour, like QMC (Q6600 @ 2.4GHz, Win XP).

Just a thought..
____________


Profile Trilce Estrada
Forum moderator
Project administrator
Project developer
Project tester

Joined: Sep 19 06
Posts: 189
ID: 119
Credit: 1,217,236
RAC: 0
Message 4468 - Posted 6 Oct 2008 15:33:20 UTC
Last modified: 12 Oct 2008 0:01:37 UTC

Hi All, we are discussing this item (increasing the credit), but we have some concerns. One is that if we increase the credit too much, we will attract the kind of volunteers who do crazy things with the code or with the machines (malicious sw modifications or overclocking) just to gain more credit, even at the price of returning bad results. Although we are using a strategy to validate, we don't want to go back to the use of HR and things like that, because as you know it results in longer times to get the credit and discrepancies between the claimed credit of one user and another, which ultimately means less credit for many users.

We need to find a value for the credit that is beneficial for you but does not attract malicious users. As I just told you, we are trying to find a solution, so stay tuned; I'll let you know as soon as we have an agreement.

edit: let me correct my words. I never meant users who just have fun with the settings of their computers or compete with each other for credit; that is part of VC and brings projects like this to life. I meant malicious attackers who tamper with the BOINC mechanisms of the credit system, which in the end results in unfairness for the VC community.

Astro
Avatar

Joined: Sep 1 08
Posts: 5
ID: 405
Credit: 201,664
RAC: 0
Message 4478 - Posted 7 Oct 2008 11:50:36 UTC
Last modified: 7 Oct 2008 12:03:25 UTC

Have you all seen a project comparison like the one done by BOINCstats? project credit comparison . Doing a quick count before my first cup of joe shows that out of the 50 projects listed, 25 pay more than Docking and 24 pay less than Docking (might wanna recount for yourselves). Don't know what you/others can make out of this, or even how they come up with these numbers. Also, there's a comparison done by "allprojectstats" but I can't find a link ATM.

Of course, instead of doing it that way, you could find the average score by adding the value of every quantity under the "Docking" column. That yields 79; then subtract the freakish 17.95 of PS3Grid and you get 61.05. Now divide that by the number of projects: 50 minus PS3Grid = 49, minus Docking itself, whose 1 wasn't added into the total, and you get 48 projects, so 61.05/48 = 1.27. So the average is 1.27, i.e. a need for a 27% increase. If you went this way, then you wouldn't be in the middle anymore.

Aren't numbers cool? You can twist them any way you want.

tony

Heck, just didn't know if you'd seen it, or even if the numbers from Allprojectstats come out differently.

tony

Profile David Ball
Forum moderator
Volunteer tester
Avatar

Joined: Sep 18 06
Posts: 274
ID: 115
Credit: 1,634,401
RAC: 0
Message 4482 - Posted 7 Oct 2008 21:34:07 UTC

The site I usually see mentioned when project admins are talking about cross-project-parity is at

http://boinc.netsoft-online.com/e107_plugins/boinc/get_cpcs.php

I think that site has a bit more recent snapshot of the credit situation and seems to show changes fairly quickly. Here are a few examples from today's numbers:

SETI: 1.000 (Always 1.000 - the standard for comparison)
Rosetta: 0.891
Docking: 0.727
WCG: 0.841
CPDN: 0.905
Malaria: 0.907
Lattice: 0.972
POEM: 1.172
Superlink: 1.020
Einstein: 1.162
Primegrid: 1.173
SIMAP: 1.382
QMC: 1.548
Milkyway: 1.908

The numbers have been gradually moving lately with many projects getting closer to SETI.


____________
The views expressed are my own.
Facts are subject to memory error :-)
Have you read a good science fiction novel lately?

Profile adrianxw
Volunteer tester
Avatar

Joined: Dec 30 06
Posts: 164
ID: 343
Credit: 1,669,741
RAC: 0
Message 4487 - Posted 9 Oct 2008 16:16:12 UTC
Last modified: 9 Oct 2008 16:24:13 UTC

The problem with SETI = 1 is that there is no way, from the exported xml stats, to see who is and who is not using an optimised science app, core client, or both (and if they are, which ones).

A further problem is that not all projects make the same demands. Should a project that crunches for an hour, using 20k of memory, 5 disk writes of 80 bytes, and a WU download size of 20 bytes give the same credit as one that crunches for an hour, needs 500M of memory, writes 50k checkpoints every 3 seconds, and has a WU download size of 60M? How about another with one-hour WUs and the other figures the same as the first example above, but where 1 WU in 3 fails?
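
(Purely for illustration, a resource-weighted scheme along these lines might look like the sketch below. The fields and weights are invented for the example; no project is known to compute credit this way.)

// Scale a base FLOPS credit by the other resources a WU consumes.
struct WuProfile {
    double flops;          // computation performed
    double mem_gb;         // peak memory footprint
    double disk_mb;        // checkpoint/download traffic
    double failure_rate;   // fraction of WUs that error out
};

double weighted_credit(const WuProfile& wu) {
    double base = wu.flops / 8.64e11;     // cobblestone scale, as earlier
    double resource_factor = 1.0
        + 0.2 * wu.mem_gb                 // reward RAM-hungry apps
        + 0.001 * wu.disk_mb;             // and I/O-heavy ones
    // Compensate hosts for work lost to failures beyond their control.
    return base * resource_factor / (1.0 - wu.failure_rate);
}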
____________
Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.

Dr Dan T. Morris
Avatar

Joined: Sep 3 08
Posts: 19
ID: 561
Credit: 1,563,073
RAC: 0
Message 4489 - Posted 10 Oct 2008 6:45:46 UTC

Dear Admins and project managers.

For over 4 years now I have had this nice hobby that I can work on at home. It's called distributed computing. And in just the last year the fun and excitement has started to dwindle down to who's going to start a peeing match with whomever over WU credits. And then I see this post by your project manager listed below.

>>> Hi All, we are discussing this item (increasing the credit), but we have some concerns. One is that if we increase the credit too much, we will attract the kind of volunteers who do crazy things with the code or with the machines (sw modifications or overclocking) just to gain more credit, even at the price of returning bad results. Although we are using a strategy to validate, we don't want to go back to the use of HR and things like that, because as you know it results in longer times to get the credit and discrepancies between the claimed credit of one user and another, which ultimately means less credit for many users.

We need to find a value for the credit that is beneficial for you but does not attract those users. As I just told you, we are trying to find a solution, so stay tuned; I'll let you know as soon as we have an agreement <<<

1. Let's start by telling you that I have over $30,000 in computer systems, which I removed from your project just as soon as I saw the posting above.
2. This statement, in its implied meaning, refers to folks such as me as the scum of the earth, just because I want to have fun with my hobby and bragging rights to how fast I can make my computers run and still be stable.
3. People like me, and teams that think like me, are getting a little ticked off at everyone who just wants us to run stock equipment and stock applications.
4. When projects start paying for our computers and we want to sell the computer time to them, then and only then will I bow down and do as you say.
5. The concept of distributed computing is to use volunteers' computers to get work done for the betterment of science and mankind. The operative word here is Volunteers.
6. And those of us who try to have fun and get the work done correctly and faster are being put down for our efforts and our goals.
7. Well folks, somewhere down the road you will run us off, until there is no longer any reason for us to participate.
8. The project admins will all have the same credit, and all of the projects will be run by the zero crew.
9. What a perfect world you will have then.

Take care and good luck on your project.

DD,

____________


Kevint

Joined: Jun 26 08
Posts: 10
ID: 389
Credit: 2,724,494
RAC: 0
Message 4490 - Posted 10 Oct 2008 14:49:01 UTC - in response to Message ID 4468 .
Last modified: 10 Oct 2008 14:51:59 UTC

Hi All, we are discussing this item (increasing the credit), but we have some concerns. One is that if we increase the credit too much, we will attract the kind of volunteers who do crazy things with the code or with the machines (sw modifications or overclocking) just to gain more credit, even at the price of returning bad results. Although we are using a strategy to validate, we don't want to go back to the use of HR and things like that, because as you know it results in longer times to get the credit and discrepancies between the claimed credit of one user and another, which ultimately means less credit for many users.

We need to find a value for the credit that is beneficial for you but does not attract those users. As I just told you, we are trying to find a solution, so stay tuned; I'll let you know as soon as we have an agreement



LESS??? And why would you want to start granting less? You are already one of the lowest-paying projects around.

If you want to increase your participant base and get more volunteers to help with your project, you should be at least on par with the other projects.

And - what does overclocking have to do with it? Just asking. Overclocking just to gain more credit - isn't this part of the fun, to overclock machines to see how fast we can make them go? Nearly ALL my machines are highly overclocked.

For example - over clocked
Profile Trilce Estrada
Forum moderator
Project administrator
Project developer
Project tester

Joined: Sep 19 06
Posts: 189
ID: 119
Credit: 1,217,236
RAC: 0
Message 4491 - Posted 10 Oct 2008 16:08:49 UTC
Last modified: 10 Oct 2008 17:51:58 UTC

Hi zeitgeistmovie.com,

I never said we would give you guys LESS credit!! I wouldn't even dare to say that without first hiding myself in the most obscure place on Earth. No, I was saying that when a project uses redundancy, it usually grants less credit to half of the participants, because it has to perform an average over a set of claimed credits for the same workunit (usually removing the highest and the lowest); so if you are claiming 20 but the average says 15, you will receive 15. Right now, the way in which we are working is: if you claim 20, you receive 20. Just to make this point clear, we are discussing giving you MORE credit, not less.
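
(As a sketch, the trimmed-average granting described above could look like this; details such as tie handling and the minimum quorum are simplified.)

#include <algorithm>
#include <numeric>
#include <vector>

// Drop the highest and lowest claims for a workunit, grant the average
// of the rest.
double granted_credit(std::vector<double> claims) {
    if (claims.empty()) return 0.0;
    if (claims.size() <= 2)   // too few claims to trim: plain average
        return std::accumulate(claims.begin(), claims.end(), 0.0) / claims.size();
    std::sort(claims.begin(), claims.end());
    double sum = std::accumulate(claims.begin() + 1, claims.end() - 1, 0.0);
    return sum / (claims.size() - 2);
}
// e.g. claims {20, 15, 14} -> sorted {14, 15, 20} -> grant 15, so the
// host claiming 20 receives 15, as in the example above.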

Now, going back to the overclocking: many of the scientific applications that projects like us are running are chaotic applications, which implies that very small divergences result in completely different results. With overclocking there is the risk that one of the millions of floating point operations executed per workunit flips a bit. This flipped bit, if it occurs early in the execution of the WU, will carry an error that accumulates over time, so at the end of the WU execution we might have a useless result. That is basically the main problem with overclocking, and also with some sw modifications. Not all overclocked machines will have the problem, but some of them will.

Regards

Profile Trilce Estrada
Forum moderator
Project administrator
Project developer
Project tester

Joined: Sep 19 06
Posts: 189
ID: 119
Credit: 1,217,236
RAC: 0
Message 4492 - Posted 10 Oct 2008 16:30:21 UTC - in response to Message ID 4489 .
Last modified: 10 Oct 2008 16:42:58 UTC

Dear DD,

First of all, I didn't mean to offend you. I'm so sorry if I did, and I apologize. The same goes for anybody else who feels this way. That was not my intention.

Second, we don't want to interfere with your fun, not at all. If you are able to do as you say, run your computers faster and still be stable, then do it; we are happy with that.

j2satx
Volunteer tester

Joined: Dec 22 06
Posts: 183
ID: 339
Credit: 16,191,581
RAC: 0
Message 4493 - Posted 10 Oct 2008 16:50:28 UTC - in response to Message ID 4468 .

Hi All, we are discussing this item (increasing the credit), but we have some concerns. One is that if we increase the credit too much, we will attract the kind of volunteers who do crazy things with the code or with the machines (sw modifications or overclocking) just to gain more credit, even at the price of returning bad results. Although we are using a strategy to validate, we don't want to go back to the use of HR and things like that, because as you know it results in longer times to get the credit and discrepancies between the claimed credit of one user and another, which ultimately means less credit for many users.

We need to find a value for the credit that is beneficial for you but does not attract those users. As I just told you, we are trying to find a solution, so stay tuned; I'll let you know as soon as we have an agreement


All my computers are over-clocked. My results must be good if your system validates them.

I agree with the issue of modifying the software application. If the project starts allowing or promoting "optimized" apps without project testing and acceptance, then I will leave. I do not run any projects that allow optimized apps that are not under project control.
j2satx
Volunteer tester

Joined: Dec 22 06
Posts: 183
ID: 339
Credit: 16,191,581
RAC: 0
Message 4494 - Posted 10 Oct 2008 16:58:54 UTC - in response to Message ID 4468 .
Last modified: 10 Oct 2008 17:12:12 UTC

Hi All, we are discussing this item (increasing the credit), but we have some concerns. One is that if we increase the credit too much, we will attract the kind of volunteers who do crazy things with the code or with the machines (sw modifications or overclocking) just to gain more credit, even at the price of returning bad results. Although we are using a strategy to validate, we don't want to go back to the use of HR and things like that, because as you know it results in longer times to get the credit and discrepancies between the claimed credit of one user and another, which ultimately means less credit for many users.

We need to find a value for the credit that is beneficial for you but does not attract those users. As I just told you, we are trying to find a solution, so stay tuned; I'll let you know as soon as we have an agreement


I have reduced my resources by 50% while you check to see if the results from my over-clocked computers are bad.

edit: I changed my mind... no need to have 50% of my computers giving you bad results. I have suspended all WUs until you verify that my computers are giving you good results.
Profile Trilce Estrada
Forum moderator
Project administrator
Project developer
Project tester

Joined: Sep 19 06
Posts: 189
ID: 119
Credit: 1,217,236
RAC: 0
Message 4495 - Posted 10 Oct 2008 17:14:17 UTC - in response to Message ID 4493 .


I agree with the issue of modifying the software application. If the project starts allowing or promoting "optimized" apps without project testing and acceptance, then I will leave. I do not run any projects that allow optimized apps that are not under project control.


Hi j2satx,

Users cannot recompile our application, so they cannot run an optimized version of it. About your resources: we haven't had any complaint about your results; when we get invalid results we usually send an email to the owner of the host.
Nite Owl
Avatar

Joined: Oct 5 08
Posts: 6
ID: 2080
Credit: 764,296
RAC: 0
Message 4497 - Posted 10 Oct 2008 18:19:59 UTC - in response to Message ID 4495 .


I agree with the issue of modifying the software application. If the project starts allowing or promoting "optimized" apps without project testing and acceptance, then I will leave. I do not run any projects that allow optimized apps that are not under project control.


Hi j2satx,

Users cannot recompile our application, so they cannot run an optimized version of it. About your resources: we haven't had any complaint about your results; when we get invalid results we usually send an email to the owner of the host.

I'm sure j2satx was talking about an optimized BOINC application, not your project app... Optimizing BOINC can indeed increase the amount of credit received when using the benchmark criteria...
____________
Teddies at Docking@Home
Profile Trilce Estrada
Forum moderator
Project administrator
Project developer
Project tester

Joined: Sep 19 06
Posts: 189
ID: 119
Credit: 1,217,236
RAC: 0
Message 4498 - Posted 10 Oct 2008 18:28:25 UTC - in response to Message ID 4497 .

Hi Nite Owl, thank you for the correction.

In that case, I'm not sure, I will ask and let you know.

j2satx
Volunteer tester

Joined: Dec 22 06
Posts: 183
ID: 339
Credit: 16,191,581
RAC: 0
Message 4499 - Posted 10 Oct 2008 19:41:20 UTC - in response to Message ID 4497 .
Last modified: 10 Oct 2008 19:42:57 UTC


I agree with the issue of modifying the software application. If the project starts allowing or promoting "optimized" apps without project testing and acceptance, then I will leave. I do not run any projects that allow optimized apps that are not under project control.


Hi j2satx,

Users cannot recompile our application, so they cannot run an optimized version of it. About your resources: we haven't had any complaint about your results; when we get invalid results we usually send an email to the owner of the host.

I'm sure j2satx was talking about an optimized BOINC application, not your project app... Optimizing BOINC can indeed increase the amount of credit received when using the benchmark criteria...


I was talking about the project app. I don't think it is possible for the project to have any control over the BOINC client, but someone has to monitor that WUs are processed within reasonable boundaries, to prevent excessive credit being granted if someone has modified the BOINC client.
Tore Zachariassen

Joined: Oct 25 08
Posts: 1
ID: 2913
Credit: 180,025
RAC: 0
Message 4535 - Posted 29 Oct 2008 10:18:28 UTC

I don't understand why there has to be this big a difference between my 'puters with different OSes when I compare them against the formula "claimed credit" = ((Whetstone + Dhrystone) x CPU seconds)/1728000.
I get granted 106% when I use the PC with Windows Vista 32-bit (granted 11.08 per hour, while the formula gives me 10.40/hour). Then I have one with Linux Ubuntu 32-bit, and I get only 91% (granted = 12.62, the formula gives 13.74), but the worst case is the two computers with Linux Ubuntu 64-bit :-(. They give me 79%!!! (the first one: granted 14.95, formula = 18.75; the other one: granted 18.95, formula 23.91).
Again - why???
Is the Windows 32-bit system much more effective? If we want the most effective DC community worldwide, then all of us have to use our computers where they do the most work; then we will get more work done with less power (anyway, that is just my opinion). That's why I have to ask whether these differences in the credit granted to the different OSes show us that the Windows 32-bit system is much more effective than the Linux 64-bit OS. Can I assume that my Linux 64-bit 'puters will do a better job elsewhere, and that they are not efficient enough here at Docking??
Or - is there another explanation?

Profile Conan
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 219
ID: 100
Credit: 4,256,493
RAC: 0
Message 4604 - Posted 10 Nov 2008 21:55:32 UTC
Last modified: 10 Nov 2008 21:56:59 UTC

Have found time to post some averages for my machines.

(Bear in mind that I take the total computational time I have provided (i.e. including all work done, successful or not) and the total granted credit given by the project (including zero-credit jobs) as the basis for my calculations. The reason is that I have donated this computational time and compare it to the credits I get in return for it.)

I will use 2 machines for comparison, as both have the same processor, M/Board and RAM (AMD Opteron 285); one has Windows and one has Linux.

Project 285 Linux 285 Windows

Cosmology---- 22.45---- 28.93
Docking------ 12.79---- 16.33
Lattice------ 11.72---- 18.13 (normally 13.74)
LHC---------- 15.59
Rosetta------ 14.52---- 10.67 (was around 14)
Ralph-------- 11.95---- 13.14
QMC---------- 29.59---- 16.49 (normally over 23)
Superlink---- 15.10---- 18.00
Hydrogen----- 0.00----- 20.68
MilkyWay----- 0.00----- 26.00
SpinHenge---- 9.63----- 16.26

Seti I only do on an AMD 4800+, and it gets around 20 with a very old optimised app which is now not very much different from a stock app.
I have not included optimised apps in this sample.
MilkyWay figures are for the normal app, not the new optimised app.
Some figures are a few months old, but all are from the last 6 months or so.
The Lattice result in Windows was influenced by a 204-hour Garli WU.
Cosmology on Windows has also dropped since this sample was done, so it is closer to 22 than 29.
No current data for CPDN and Einstein.
If I was able to get the same output from Linux as I get from Windows, I would have a much higher RAC on nearly all projects.

Docking is very much lower than most projects I do, particularly on Linux.

The figure I get on the AMD 4800+ for Docking is 14.70, alongside Seti at 20.26.

Thanks Conan.
____________

Profile Cori
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 161
ID: 90
Credit: 5,817
RAC: 0
Message 4738 - Posted 17 Jan 2009 12:14:34 UTC
Last modified: 17 Jan 2009 12:14:53 UTC

*Bump*

With all these nice changes going on recently the credits issue seems to have gone down a bit on the priority list... *grin*

Really, you have all a project could wish for: great forums, neat applications for different OSes (even 64-bit is supported)... if only the credits weren't so low!
My dual core lappy runs under XP 64-bit, and usually I'm getting around 30-40 credits/hour on most of the projects. Some are around 25-30 cr/h, but here I'm getting under 15 cr/h!

Please, can you do something about it? *fluttering my eyelashes* :-)))
____________
Bribe me with Lasagna!! :-)

Profile Conan
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 219
ID: 100
Credit: 4,256,493
RAC: 0
Message 4917 - Posted 20 Apr 2009 5:06:33 UTC - in response to Message ID 4738 .

*Bump*

With all these nice changes going on recently the credits issue seems to have gone down a bit on the priority list... *grin*

Really, you have all a project could wish for: great forums, neat applications for different OSes (even 64-bit is supported)... if only the credits weren't so low!
My dual core lappy runs under XP 64-bit and usually I'm getting around 30-40 credits/hour on most of the projects. Some are around 25-30 cr/h but here I'm getting under 15cr/h!

Please, can you do something about it? *fluttering my eyelashes* :-)))


"BUMP" again.

Hello project team, has there been any movement or progress with increasing the credit granted by this project?

It has been quite a while since anything has been heard.
____________
Profile Scientific Frontline
Avatar

Joined: Mar 25 09
Posts: 42
ID: 8725
Credit: 788,015
RAC: 0
Message 4919 - Posted 20 Apr 2009 21:02:50 UTC



I have to add that such concerns about low credit have been mentioned within our team.
It is an issue that should be addressed soon, since other projects are awarding more credits for less CPU time. For me it is not an issue, but for serious crunchers... that is their game, one could say.
Heidi-Ann Kennedy
____________

Recognized by the Carnegie Institute of Science . Washington D.C.

Profile adrianxw
Volunteer tester
Avatar

Joined: Dec 30 06
Posts: 164
ID: 343
Credit: 1,669,741
RAC: 0
Message 4920 - Posted 21 Apr 2009 8:44:04 UTC
Last modified: 21 Apr 2009 8:50:39 UTC

Changing the credit method means that work done before the change is worth less credit than after the change. That is not fair to all who crunch now. You might want to consider that in your deliberations.

Frankly, I have no problem with the credit system now; I can see what others doing this project are doing. Why does it matter that places like MilkyWay give more credit than you? People who are credit-obsessed tend to flock around the high payers; you can see it in their portfolios. I know what I think of them, but whatever floats your boat.
____________
Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.

Profile Scientific Frontline
Avatar

Joined: Mar 25 09
Posts: 42
ID: 8725
Credit: 788,015
RAC: 0
Message 4921 - Posted 21 Apr 2009 13:17:26 UTC - in response to Message ID 4920 .
Last modified: 21 Apr 2009 13:44:45 UTC

Changing the credit method means that work done before the change is worth less credit than after the change. That is not fair to all who crunch now. You might want to consider that in your deliberations.

Frankly, I have no problem with the credit system now; I can see what others doing this project are doing. Why does it matter that places like MilkyWay give more credit than you? People who are credit-obsessed tend to flock around the high payers; you can see it in their portfolios. I know what I think of them, but whatever floats your boat.



Dear adrianxw,
I agree up to a point. As I stated, I am fine with things as they are, but the important thing here is the contribution to science / Docking@Home, and if it is numbers that crunchers need to stay active for such a worthy cause, then it is numbers that a project needs to supply to those who are donating computing time. It is a win-win situation, no matter how one feels about the number obsession. It's not how you, me, or anyone else feels about it... it's what we can do to promote Docking@Home to those who have the equipment to calculate the work units. Those you call obsessed are also those who can do the most good for science; they are the ones we need to cater to for the important work that Docking is trying to achieve. Not seeing that factor is overlooking the most valuable variable in the distributed computing system.

Again, to me it is all about the science and not the numbers as such, yet the science needs the dedication of serious crunchers.

Sincerely,
Heidi-Ann Kennedy
____________

Recognized by the Carnegie Institute of Science . Washington D.C.
zombie67 [MM]
Volunteer tester
Avatar

Joined: Sep 18 06
Posts: 207
ID: 114
Credit: 2,817,648
RAC: 0
Message 4967 - Posted 1 May 2009 1:19:39 UTC

SETI@home changes their credits all the time. And they are supposed to be the freakin' benchmark. And all the projects are supposed to be matching SETI, which means all the projects have to constantly re-adjust too. *sigh*
____________
Dublin, CA
Team SETI.USA

Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5108 - Posted 5 Jul 2009 22:16:31 UTC - in response to Message ID 4535 .
Last modified: 5 Jul 2009 22:49:57 UTC

I don't understand why there has to be this big difference between my 'puters with different OS when I compare them up against the formula "claimed credit" = ((Whetstone + Dhrystone) x Cpu Seconds)/1728000??

Because Docking uses
credit = CPU seconds * (19 * Whetstone + Dhrystone) / 12,096,000

That's the reason it claims so low on most machines: the floating point (Whetstone) benchmark score, which gets the 19-fold weight, is usually a lot lower than the integer (Dhrystone) one. Granting 1000 credits a week instead of 100 credits a day to the virtual 1000 MFLOPS and 1000 MIPS standard computer does not help, as real machines normally have a Dhrystone score around a factor of 2 higher than their Whetstone score.
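
To make the weighting concrete, here is a minimal sketch (Python; the two formulas are the ones quoted above, the benchmark numbers below are purely illustrative):

# Claimed credit per day of CPU time under the two formulas quoted above.
# Benchmark values are in million ops/sec, as BOINC reports them.
SECONDS_PER_DAY = 86400

def standard_claim(whetstone, dhrystone, cpu_seconds):
    # Classic BOINC benchmark formula: 100 credits/day for the
    # virtual 1000 MFLOPS / 1000 MIPS standard machine.
    return cpu_seconds * (whetstone + dhrystone) / 1_728_000

def docking_claim(whetstone, dhrystone, cpu_seconds):
    # Docking's variant: Whetstone (floating point) weighted 19:1.
    return cpu_seconds * (19 * whetstone + dhrystone) / 12_096_000

print(standard_claim(1000, 1000, SECONDS_PER_DAY))  # 100.0 per day
print(docking_claim(1000, 1000, SECONDS_PER_DAY))   # ~142.9 per day, i.e. 1000/week
# A machine whose integer score is well above twice its FP score
# (illustrative numbers) claims less here than under the standard formula:
print(standard_claim(1000, 2500, SECONDS_PER_DAY))  # 175.0
print(docking_claim(1000, 2500, SECONDS_PER_DAY))   # ~153.6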


@ the project staff:
By the way, using benchmark-based credits without a quorum is really brain-damaged if you are concerned about the malicious behaviour of some people (as it appears you are, judging from some comments here), as manipulating those benchmark scores is one of the easiest things to do. Furthermore, the whole BOINC-integrated benchmark stuff is seriously flawed, as it varies a lot between different OSes or BOINC versions. It also does not value architectural improvements of the CPUs and the ecosystem which don't improve the benchmark scores. Just as an example, for exactly the same WU, an AMD Phenom running XP64 gets about 88 credits, a Phenom with XP32 104 credits, an AthlonX2 running WinXP32 gets 114 credits, and an Intel Core i7 (with Hyperthreading reducing the performance per individual thread) under XP32 gets even 124 credits for the exact same work. That does not look right to me! Awarding one system almost 50% higher credits for the same work because of its benchmark score, even though it is actually slower, is really the wrong way to tackle the credit issue ;)

It isn't that hard to implement fixed credits, as the WUs appear to be very evenly sized. It is the second best thing after flops-based credits (i.e. really counting the executed operations in the code) and has the advantage that it could be implemented immediately.

This credit stuff is important to a lot of crunchers. So if you want to secure or even extend your user base, you should think about starting to credit at least on par with other projects. Look at Spinhenge for instance! They adopted a fixed credit scheme half a year ago and it works really well. With such a scheme in place there is no way to "cheat" to get more credits than others. Besides using more resources to crunch, of course ;)
Profile Conan
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 219
ID: 100
Credit: 4,256,493
RAC: 0
Message 5111 - Posted 6 Jul 2009 15:03:17 UTC - in response to Message ID 5108 .

I don't understand why there has to be this big difference between my 'puters with different OS when I compare them up against the formula "claimed credit" = ((Whetstone + Dhrystone) x Cpu Seconds)/1728000??

Because Docking uses
credit = CPU seconds * (19 * Whetstone + Dhrystone) / 12,096,000

That's the reason it claims so low on most machines: the floating point (Whetstone) benchmark score, which gets the 19-fold weight, is usually a lot lower than the integer (Dhrystone) one. Granting 1000 credits a week instead of 100 credits a day to the virtual 1000 MFLOPS and 1000 MIPS standard computer does not help, as real machines normally have a Dhrystone score around a factor of 2 higher than their Whetstone score.


@ the project staff:
By the way, using benchmark-based credits without a quorum is really brain-damaged if you are concerned about the malicious behaviour of some people (as it appears you are, judging from some comments here), as manipulating those benchmark scores is one of the easiest things to do. Furthermore, the whole BOINC-integrated benchmark stuff is seriously flawed, as it varies a lot between different OSes or BOINC versions. It also does not value architectural improvements of the CPUs and the ecosystem which don't improve the benchmark scores. Just as an example, for exactly the same WU, an AMD Phenom running XP64 gets about 88 credits, a Phenom with XP32 104 credits, an AthlonX2 running WinXP32 gets 114 credits, and an Intel Core i7 (with Hyperthreading reducing the performance per individual thread) under XP32 gets even 124 credits for the exact same work. That does not look right to me! Awarding one system almost 50% higher credits for the same work because of its benchmark score, even though it is actually slower, is really the wrong way to tackle the credit issue ;)

It isn't that hard to implement fixed credits, as the WUs appear to be very evenly sized. It is the second best thing after flops-based credits (i.e. really counting the executed operations in the code) and has the advantage that it could be implemented immediately.

This credit stuff is important to a lot of crunchers. So if you want to secure or even extend your user base, you should think about starting to credit at least on par with other projects. Look at Spinhenge for instance! They adopted a fixed credit scheme half a year ago and it works really well. With such a scheme in place there is no way to "cheat" to get more credits than others. Besides using more resources to crunch, of course ;)


I agree with what you say, Cluster Physik, and sadly I have stopped Docking on one computer due to the 27% drop in benchmark scores on my one Linux machine after "upgrading" BOINC from 5.10.21 to 6.4.5, so I am not impressed by that.
What I earn on my Linux machines pales against what a Windows machine earns, even with the same components.

Also, at SpinHenge the Windows apps run 30% or more faster than the Linux apps (at least on my AMD Opterons), so the fixed credit awarded on the SpinHenge project benefits Windows at the moment.
They are still working on a faster Linux app (for over a year now, I think, but they say they will introduce one).

Conan
____________
Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5112 - Posted 10 Jul 2009 9:18:34 UTC

No feedback?

Maybe the project should have a look at the recent events at Aqua@Home. Such things could happen here too, as long as you don't fix your credits!

I don't want to demonstrate it the way Alliance Francophone did there, but it is really easy to cheat here!

Profile Michela
Forum moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar

Joined: Sep 13 06
Posts: 163
ID: 10
Credit: 97,083
RAC: 0
Message 5114 - Posted 11 Jul 2009 15:22:05 UTC - in response to Message ID 5112 .

No feedback?

Maybe the project should have a look at the recent events at Aqua@Home. Such things could happen here too, as long as you don't fix your credits!

I don't want to demonstrate it the way Alliance Francophone did there, but it is really easy to cheat here!


We understand that credits are important for some of our volunteers. Still, the credit question remains an open issue. We had, for a short time, a fixed amount of credits per result, but then some of our volunteers felt that they were penalized because they had slow machines.

If you feel that D@H does not reward you for your commitment, please feel free to donate your idle cycles to other projects. D@H is one of several projects that are looking at important scientific issues with the help of the public. This is marvelous and unique!

We, the D@H, team are committed to the volunteer computing principle. We feel that any demonstration against a single project not only damages the work of students dedicated to their research but also (and more importantly) damages the other volunteers participating in the project and in general the volunteer computing paradigm.

We are currently double-testing the new screensaver. One of our D@H volunteers identified a problem in the visualization and we have been able to fix the issue. Now we want to make sure that the code works properly before we distribute it. I want to point out that the issue with the visualization was found by one of you, and this is just great, because we feel that you all are part of our team.

Once we have the new screensaver out, we will have a meeting to discuss the credit issue. Again, it is very challenging to meet all the expectations but once again we will do our best.

Thank you for your support!

Michela

____________
If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'!
Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5115 - Posted 11 Jul 2009 17:59:21 UTC - in response to Message ID 5114 .

Maybe the project should have a look at the recent events at Aqua@Home. Such things could happen here too, as long as you don't fix your credits!

I don't want to demonstrate it the way Alliance Francophone did there, but it is really easy to cheat here!


We understand that credits are important for some of our volunteers. Still, the credit question remains an open issue. We had, for a short time, a fixed amount of credits per result, but then some of our volunteers felt that they were penalized because they had slow machines.

If you really think about it, you will see that the advantages by far outweigh the disadvantages (are there even any?). Someone contributing less to the science, i.e. calculating fewer WUs, should also get less of that virtual reward called credits; very simple. But the real problem is the possibility to cheat.
Once you have solved that, you can (and should) still try to figure out why AMD-powered machines are quite a bit slower here than Intel ones (it may have something to do with the compilers and/or options used).

If you feel that D@H does not reward you for your commitment, please feel free to donate your idle cycles to other projects.

Thanks for that advice! By the way, have you recently looked how much not only me but also my whole team is contributing here at Docking?

We are currently double-testing the new screensaver. One of our D@H volunteers identified a problem in the visualization and we have been able to fix the issue. Now we want to make sure that the code works properly before we distribute it. I want to point out that the issue with the visualization was found by one of you, and this is just great, because we feel that you all are part of our team.

Frankly, I think most people deactivate such stuff anyway. It may be nice for a project to have, but those seeing BOINC as some kind of competition (quite a lot of people, if you ask me) are more interested in the performance of their computers and deactivate it. And those interested in the scientific value of their donated computing power may take a short look and then deactivate it as well.

To sum it up, a screensaver is a nice add-on, but the credits are a basic ingredient. A project can be almost torn apart over this issue. In an ideal world one wouldn't need them, but BOINC was designed to encourage competition between individuals as well as between teams in order to raise the donated computing power. And as in any competition with a lot of people, there are always some malicious guys among them trying to cheat.
Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5116 - Posted 11 Jul 2009 21:14:59 UTC - in response to Message ID 5114 .
Last modified: 11 Jul 2009 21:25:28 UTC

Just a small addon.

We feel that any demonstration against a single project not only damages the work of students dedicated to their research but also (and more importantly) damages the other volunteers participating in the project and in general the volunteer computing paradigm.

I agree with that. But there is an easy prevention against such malicious action: just use fixed credits.
The benchmark approach you are using now is completely bogus and very easy to manipulate, which "not only damages the work of students dedicated to their research but also (and more importantly) damages the other volunteers participating in the project and in general the volunteer computing paradigm", as you put it. You don't control the environment in which your application runs, so you can rely neither on the benchmark values nor on the time reported for the WUs. You should really calculate the credit independently of those values. Otherwise this old system would get 280k credits a day with the reported benchmark values. I won't let it calculate any WU in that state (it doesn't work anyway on that old Linux 2.4 kernel), but I guess it shows the problem.
Total Credit 0
Avg. credit 0.00
CPU type AuthenticAMD
AMD Athlon(tm) MP 1500+ [Family 6 Model 6 Stepping 2]
Number of CPUs 2
Operating System Linux
2.4.20-64GB-SMP
Memory 1008.22 MB
Cache 256 KB
Swap space 5122.25 MB
Total disk space 114.5 GB
Free Disk Space 21.65 GB
Measured floating point speed 1000000 million ops/sec
Measured integer speed 1000000 million ops/sec
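
For what it's worth, the 280k figure checks out against the formula quoted earlier (a quick sketch in Python; the only inputs are the manipulated values shown above):

# Plugging the reported benchmark values into Docking's formula,
# credit = CPU seconds * (19 * Whetstone + Dhrystone) / 12,096,000:
whetstone = dhrystone = 1_000_000  # million ops/sec, as reported above
cpus = 2
per_cpu_per_day = 86400 * (19 * whetstone + dhrystone) / 12_096_000
print(per_cpu_per_day * cpus)  # ~285,714 credits/day, i.e. roughly 280k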


By the way, Aqua put fixed credits in place within a day of the incidents due to such manipulations there. It can't be that hard ;)
Profile MrBad
Avatar

Joined: Sep 3 08
Posts: 4
ID: 615
Credit: 1,050,243
RAC: 0
Message 5117 - Posted 11 Jul 2009 22:26:20 UTC - in response to Message ID 5114 .

.

We are currently double-testing the new screensaver. One of our D@H volunteers identified a problem in the visualization and we have been able to fix the issue. Now we want to make sure that the code works properly before we distribute it. I want to point out that the issue with the visualization was found by one of you, and this is just great, because we feel that you all are part of our team.

Once we have the new screensaver out, we will have a meeting to discuss the credit issue. Again, it is very challenging to meet all the expectations but once again we will do our best.

Thank you for your support!

Michela


Whoohooo...

The brain power of the University of Delaware, The Scripps Research Institute, and the University of California - Berkeley creates a new screensaver!

That's great!

iwanthimiwanthimiwanthim....

MB ;-)
Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5120 - Posted 12 Jul 2009 10:25:08 UTC

I just want to clarify that I'm not this guy. He/she/it obviously had fewer objections than I did against a small demonstration. Just look at this task:

Task ID 6386453
Name 1yqj_mod0014p38alpha_28381_449956_2
Workunit 5860272
Created 11 Jul 2009 18:32:21 UTC
Sent 11 Jul 2009 18:41:12 UTC
Received 12 Jul 2009 5:40:07 UTC
Server state Over
Outcome Success
Client state Done
Exit status 0 (0x0)
Computer ID 38017
Report deadline 22 Jul 2009 6:01:12 UTC
CPU time 29938.21
stderr out <core_client_version>6.6.23</core_client_version>
<![CDATA[
<stderr_txt>
Calling BOINC init.
Starting charmm run (initial or from checkpoint)...
Calling BOINC init.
Starting charmm run (initial or from checkpoint)...
SUCCESS - Charmm exited with code 0.
Resolving file charmm.out...
Calling BOINC finish.
called boinc_finish

</stderr_txt>
]]>

Validate state Valid
Claimed credit 210249.324966669
Granted credit 210249.324966669

application version 6.15


Guess the problem I spoke of is obvious.
test

Joined: Jan 1 70
Posts: 1
ID: 15388
Credit: 0
RAC: 0
Message 5121 - Posted 12 Jul 2009 10:49:24 UTC

You've got a point, Gipsel ;)

Profile [B^S] BOINC-SG
Volunteer tester
Avatar

Joined: Oct 2 06
Posts: 17
ID: 136
Credit: 52,985
RAC: 0
Message 5122 - Posted 12 Jul 2009 10:57:54 UTC - in response to Message ID 5117 .

.

We are currently double-testing the new screensaver. One of our D@H volunteers identified a problem in the visualization and we have been able to fix the issue. Now we want to make sure that the code works properly before we distribute it. I want to point out that the issue with the visualization was found by one of you, and this is just great, because we feel that you all are part of our team.

Once we have the new screensaver out, we will have a meeting to discuss the credit issue. Again, it is very challenging to meet all the expectations but once again we will do our best.

Thank you for your support!

Michela


Whoohooo...

The brain power of the University of Delaware, The Scripps Research Institute, and the University of California - Berkeley creates a new screensaver!

That's great!

iwanthimiwanthimiwanthim....

MB ;-)



ME TOO! *LMAO*

You guys must be really bored to invest your precious time in programming a lousy screensaver!
____________


My NEW BOINC-Site

Why people joined BOINC Synergy...
Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5123 - Posted 12 Jul 2009 13:04:14 UTC - in response to Message ID 5121 .

You've got a point, Gipsel ;)

Unfortunately, yes. I wanted to avoid the current situation, but I guess your action will raise the pressure a bit.

But I'm glad you created a new account and a new team for this, so you don't mess up all the statistics. The cleanup will be a lot easier if the issue is isolated to a single user. The mess created by Alliance Francophone over at Aqua is quite bad in my opinion.
Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5124 - Posted 12 Jul 2009 16:04:37 UTC

So what is happening now?
I see the project has deleted that _[Docker]_ account and some posts referring to it, too. Good, but are you doing something to solve the problem at its root?

Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5125 - Posted 12 Jul 2009 16:13:26 UTC
Last modified: 12 Jul 2009 16:33:56 UTC

Double post.

Profile Michela
Forum moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar

Joined: Sep 13 06
Posts: 163
ID: 10
Credit: 97,083
RAC: 0
Message 5127 - Posted 12 Jul 2009 16:20:32 UTC - in response to Message ID 5125 .

So what is happening now?
I see the project has deleted that _[Docker]_ account and some posts referring to it, too. Good, but what are you doing to solve the problem at its root?


We are working on a solution for the credit problem. We will provide more detail as soon as we have found a good fix.

Michela
____________
If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'!
Profile L@MiR
Avatar

Joined: Dec 7 08
Posts: 3
ID: 4510
Credit: 618,946
RAC: 0
Message 5129 - Posted 12 Jul 2009 17:39:19 UTC
Last modified: 12 Jul 2009 17:41:11 UTC

There you go... Dark Gipsel...

http://docking.cis.udel.edu/community/forum/thread.php?id=447

Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5132 - Posted 12 Jul 2009 19:02:24 UTC - in response to Message ID 5129 .

There you go... Dark Gipsel...

Yes, now it works. In the meantime the thread was cut off after Message 5120 and new posts were hidden. But now everything is back.
[AF>EDLS>Biomed] tibidao

Joined: Sep 3 08
Posts: 1
ID: 607
Credit: 71,152
RAC: 0
Message 5133 - Posted 13 Jul 2009 0:44:20 UTC - in response to Message ID 5123 .
Last modified: 13 Jul 2009 0:44:50 UTC


The mess created by Alliance Francophone over at Aqua is quite bad in my opinion.


It's not a mess created by ALL of the Alliance...
First, it was only two of them; second, it was done in order to show that there was a very big problem with the points. Please don't forget this.
But everybody prefers to scream about them instead of screaming at all the cheaters who were earning their BIG credits silently......
(sorry if my English is not perfect ;) ).
Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5134 - Posted 13 Jul 2009 1:00:37 UTC - in response to Message ID 5133 .

The mess created by Alliance Francophone over at Aqua is quite bad in my opinion.

It's not a mess created by ALL of the Alliance...
First, it was only two of them; second, it was done in order to show that there was a very big problem with the points. Please don't forget this.
But everybody prefers to scream about them instead of screaming at all the cheaters who were earning their BIG credits silently......

Okay, it was two members of Alliance Francophone who created the mess. Better?
And afaik they claimed on the Aqua board that the issue had been openly discussed in your forum before.

But what I was referring to is that they used their normal accounts for this. Here, that _[Docker]_ guy created an account and a team specifically for this purpose. Instead, your two colleagues chose to take the credits to their personal accounts, and thereby also to the account of AF. They mixed it up. That is what I call a mess.
Profile TheFiend

Joined: Apr 7 09
Posts: 70
ID: 9482
Credit: 20,705,527
RAC: 0
Message 5135 - Posted 13 Jul 2009 7:24:00 UTC

This is pathetic.... somebody is upset about the amount of credit so they take to hacking results.... Childish!!! :wall:

Docking is not about credit for crunching...... it's about the science.

If this hacking with credit continues then I will probably say goodbye to Docking.

Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5136 - Posted 13 Jul 2009 9:34:48 UTC - in response to Message ID 5135 .
Last modified: 13 Jul 2009 9:35:55 UTC

This is pathetic.... somebody is upset about the amount of credit so they take to hacking results.... Childish!!! :wall:

Docking is not about credit for crunching...... it's about the science.

If this hacking with credit continues then I will probably say goodbye to Docking.

Don't be so quick with your judgement. That was not about the credit level. It was aimed at proving a severe hole in the current credit system. This flaw has existed all along, and I'm sure a few have even used it to gain credits in an unfair way.
I agree this is a valid reason to leave a project. But Docking is already working on a solution which will probably quite literally "fix" the issue, as fixed credits are the easiest and safest way to prevent cheating. The introduction of a quorum would reduce the scientific output of the project and would only limit the "effectiveness" of these cheats, not eliminate them altogether.
Profile Beyond

Joined: Feb 9 09
Posts: 8
ID: 6984
Credit: 3,132,056
RAC: 0
Message 5137 - Posted 13 Jul 2009 16:13:03 UTC

I'd vote for fixed, server based credit. The BOINC benchmarking system is useless, as it unreasonably favors certain processors and OSes. It also encourages various cheats and hacks. Server based credits preclude most of the problems and will make your lives (and ours) much more peaceful.

Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5138 - Posted 13 Jul 2009 16:35:10 UTC - in response to Message ID 5137 .

I'd vote for fixed, server based credit. The BOINC benchmarking system is useless, as it unreasonably favors certain processors and OSes. It also encourages various cheats and hacks. Server based credits preclude most of the problems and will make your lives (and ours) much more peaceful.

Exactly!
Profile Michela
Forum moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar

Joined: Sep 13 06
Posts: 163
ID: 10
Credit: 97,083
RAC: 0
Message 5139 - Posted 13 Jul 2009 18:35:11 UTC - in response to Message ID 5138 .

What are the pros of variable credits? Three reasons why fixed credits will not always be the right way to assign credits in Docking@Home:

1. We are running simulations with different proteins and ligands: each complex has a different size in terms of number of atoms, and this can result in variable lengths for the jobs when moving from one complex to another.

2. We are running different docking models: two algorithms are used for the docking simulations, each with different characteristics and lengths in terms of flops and time. Model 13 is shorter than model 14 because it uses a different representation of the solvent.

3. Last and most important, each job has a non-deterministic length (the non-determinism is intrinsic to the molecular dynamics simulation performed): we set up a certain number of random conformations per job for our ligand, and for each conformation we set up a certain number of rotations; however, if during the docking simulation there is an energy violation, the simulation is terminated. The volunteer gets the credits for the computer work done to that point and no penalty is applied. The volunteer can just proceed with the simulation of the next job. We use the simulation results from before the violation happens - nothing is wasted! We cannot predict the energy violation in advance, but it is better to stop the jobs that are causing the violation rather than to continue them.

If the third condition happens, the assignment of fixed credits is not fair. However, the client immediately starts to crunch a new job and to earn new credits.


How are we now preventing volunteers from getting 1M credits per job? We changed the validation daemons so that we do not assign credits if:

1. a client suddenly asks for a higher amount of credits than in the past
2. a client asks for more than expected for similar machines. In other words, each job can get a maximum amount of credits.

If any of the conditions above happens, the job is classified as invalid.
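
As a rough illustration, the two checks could look something like this (a minimal sketch, not the project's actual daemon code; the threshold and all names are invented for the example):

# Hypothetical sketch of the two validity checks described above.
# history_max: highest credit this host has claimed so far;
# job_max: maximum credit allowed per job for similar machines.
def is_valid_claim(claimed, history_max, job_max, growth_factor=1.5):
    # 1. Reject a claim that suddenly jumps far above the host's history.
    if history_max and claimed > growth_factor * history_max:
        return False
    # 2. Reject a claim above the per-job ceiling.
    if claimed > job_max:
        return False
    return True

# A host that used to claim ~90 suddenly claims 210,000:
print(is_valid_claim(210_000, history_max=90, job_max=150))  # False
print(is_valid_claim(88, history_max=90, job_max=150))       # True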


What is our next step? We will put up a survey, open for the next 20 days, to collect your votes and set up the credit system accordingly.



anthonmg

Joined: Apr 11 09
Posts: 64
ID: 9657
Credit: 17,959,472
RAC: 0
Message 5140 - Posted 13 Jul 2009 20:41:58 UTC - in response to Message ID 5135 .



Docking is not about credit for crunching...... it's about the science.

If this hacking with credit continues then I will probably say goodbye to Docking.


If Docking is about science and not credit, then why would you leave the science because people are messing with the credit? This is how a few bad apples spoil the bunch: by our reacting to them. Let your computer do its honest work and keep contributing to this valuable project. If people are messing with the results, the scientists will know and deal with those results appropriately, and it looks like they're adding some measures to protect the credit system. Yeah, it's fun to see yourself rise and fall in the ranks, but it is more satisfying to see quality results and publications from the aggregation of all these data.

anthonmg

Joined: Apr 11 09
Posts: 64
ID: 9657
Credit: 17,959,472
RAC: 0
Message 5141 - Posted 13 Jul 2009 20:47:07 UTC - in response to Message ID 5139 .

I know y'all are more experienced at this than I am, but I'm wondering if there's a server-side way of determining, from the uploaded results, how many cycles were carried out in a particular work unit. It totally makes sense that they're non-deterministic and you can't know exactly how long or how much math each one will require, but in the final analysis, when you have the data back, are there ways to figure out how many computations were used to generate the returned results? (refinement cycles * atoms * model 13 or 14 correction * credit/flop) or something like that? My thought is that a failed work unit might then still generate some credit, since the machine expended effort on it. At some point I had a lot of failed work units that went on for hours without terminating, or dropped after a few hours, but processor time was used and the RAC dropped a lot (it takes almost a month to recover). This would also be harder to mess with: you'd be able to use similar metrics about time to completion vs. similar systems, whether more cycles are reported than can reasonably be done in the time the unit ran, etc. Just a thought,
Tony
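
The back-of-envelope formula suggested above could be written out like this (purely hypothetical names and factors, just restating the idea in code):

# Hypothetical restatement of the idea above: derive credit from the
# work actually reported in the result, not from client benchmarks.
def credit_from_result(cycles, atoms, model_factor, credit_per_op):
    # cycles * atoms approximates the operations performed;
    # model_factor corrects for model 13 vs. model 14.
    return cycles * atoms * model_factor * credit_per_op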

JKuehl2

Joined: Jul 2 09
Posts: 3
ID: 14773
Credit: 277,202
RAC: 0
Message 5144 - Posted 14 Jul 2009 8:20:02 UTC

If you had access to the source code and could implement a counter or something like that, this would be possible with no problem (at least to check, in an asymptotic manner, how much work was done).

BUT: the problem seems to be that Docking runs a wrapper around a wrapper (around a wrapper?), as the CHARMM model, program and force field must be licensed in order to be used, and no changes to the source code are allowed for third parties.

Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5146 - Posted 14 Jul 2009 15:32:17 UTC - in response to Message ID 5139 .
Last modified: 14 Jul 2009 16:07:54 UTC

What are the pros of variable credits? Three reasons why fixed credits will not always be the right way to assign credits in Docking@Home:

If you want, I can give many more than three reasons why fixed credits are fundamentally better than benchmark-based ones. But first let me respond to your reasoning and explain why I don't agree.

1. We are running simulations with different proteins and ligands: each complex has a different size in terms of number of atoms, and this can result in variable lengths for the jobs when moving from one complex to another.

Each complex already belongs to a different WU series. You can easily assign different credit values to different series. That's no argument in my eyes.

2. We are running different docking models: two algorithms are used for the docking simulations, each with different characteristics and lengths in terms of flops and time. Model 13 is shorter than model 14 because it uses a different representation of the solvent.

Same answer as for point 1. I'm very sure you test the input for a new WU series with a new molecule or a new docking model locally before you distribute it to the participants. Otherwise it would be grossly negligent, as one can't rule out human error (e.g. just switching two values, selecting a wrong model, whatever). So you already have information about the runtime for all molecule/model combinations in a controlled environment. Just use this information, which is already at hand, to determine the fixed credit for each WU series! Virtually no increased effort on your side (as opposed to the system you just proposed) and no drawbacks for the participants (I will come to this point later).
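
In code, that calibration idea amounts to something like this (a sketch under the assumption that each series is pretested on a reference machine; all names and the reference rate are invented):

# Sketch: derive a fixed credit per WU series from local pretest runs.
# reference_rate is the credit/hour the project wants to grant on the
# calibration machine; the runtimes are measured there during pretesting.
def fixed_credit_for_series(pretest_runtimes_hours, reference_rate=15.0):
    avg_runtime = sum(pretest_runtimes_hours) / len(pretest_runtimes_hours)
    return reference_rate * avg_runtime

# e.g. a series averaging 5.5 h on the reference box:
print(fixed_credit_for_series([5.2, 5.6, 5.7]))  # 82.5 credits per WU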

3. Last and most important, each job has a non-deterministic length (the non-determinism is intrinsic to the molecular dynamics simulation performed): we set up a certain number of random conformations per job for our ligand, and for each conformation we set up a certain number of rotations; however, if during the docking simulation there is an energy violation, the simulation is terminated. The volunteer gets the credits for the computer work done to that point and no penalty is applied. The volunteer can just proceed with the simulation of the next job. We use the simulation results from before the violation happens - nothing is wasted! We cannot predict the energy violation in advance, but it is better to stop the jobs that are causing the violation rather than to continue them.


Frankly, these cases appear to be quite rare. The execution times within a series are very uniform. And even if an energy violation is detected and the WU ends early, I'm sure you will know about it from the output file (you should!) and can grant credits proportional to the normal value.
Even if the WU length were non-deterministic, you could still grant fixed credits as long as the average is okay (look at POEM, for example).
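
The early-termination case then reduces to a one-liner (a sketch, assuming the returned output reveals how far the simulation got):

# Sketch: pro-rate the series' fixed credit when a WU ends early on an
# energy violation (fraction_done parsed from the returned output file).
def granted_credit(series_credit, fraction_done=1.0):
    return series_credit * fraction_done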

How are we now preventing volunteers from getting 1M credits per job? We changed the validation daemons so that we do not assign credits if:

1. a client suddenly asks for a higher amount of credits than in the past
2. a client asks for more than expected for similar machines. In other words, each job can get a maximum amount of credits.

If any of the conditions above happens, the job is classified as invalid.

1. So it would be okay to gradually increase the amount of cheating?
2. That means one just has to be clever and invent a CPU name no one else is using? What about overclocked machines? If you are able to define a maximum credit value for a WU series, why is it impossible to just define an appropriate value everyone simply gets for a WU, without further ado?

Just off the top of my head there are a lot of reasons why this would still be worse than simply fixed credits (i.e. determined on the server side, independent of reported runtime or benchmark figures).

First of all, a very fundamental one. As I said already, you can't control the environment your app is running in. You can't rely on any information that comes back. That includes not only the benchmark values but also the CPU name, OS and so on (as demonstrated by _[Docker]_). Frankly, you should also add some kind of plausibility check to the results (I'm just guessing there is none in place), as one can even tinker with the WUs themselves (they look like interpreted scripts).

Generally there are a lot of problems with the BOINC benchmarks even if one does not manipulate them. The benchmark values vary a lot when comparing different BOINC versions and/or different OSes. As an example, look at this computer running client 5.10.45 under Linux and this completely identical machine just with WinXP. The Linux host registers benchmark values of only
Measured floating point speed 747.84 million ops/sec
Measured integer speed 1382.91 million ops/sec
while under Windows I see
Measured floating point speed 1335.22 million ops/sec
Measured integer speed 2249.84 million ops/sec

Quite a difference, I think.

Furthermore, there are sometimes severe problems with CPUs capable of dynamically changing their clock speed. That applies to virtually all notebook CPUs. And you may know that AMD CPUs downclock themselves under light load to 800 MHz or 1 GHz, while under full load they may run at more than 3 GHz. I've seen several systems where the benchmark caused too little load to "wake up" the CPU to its full clock speed, whereas the WUs ran at full throttle afterwards. This leads to severely underclaiming hosts, and if the benchmark values get (correctly) recalculated at some point (there is a random component to this problem), they would get their WUs marked as invalid (because of claiming much more than before) under your proposed system. Another problem is that there may be heavy (non-BOINC) activity on the system when the benchmark is executed. This will also lead to severely reduced scores.

But overclaiming benchmarks are also entirely possible without willfully manipulating anything. Just think of the new Core i7 series and its "Turbo" feature. If only one core is loaded and/or the CPU temperature is low, it raises the frequency of the loaded core(s). This easily leads to benchmark scores (which partly run only single-threaded!) that are not representative of the actual crunching speed. The Hyperthreading feature actually makes this even worse. I think I already gave the example of 88 credits claimed by a Phenom or Core2 and 124 credits claimed by a Core i7 for the same WU. And this problem will only get more pronounced as the CPU manufacturers implement further features that help the crunching speed but not the benchmark score, or extend the functionality of automated load- and temperature-dependent clocking schemes like Cool'n'Quiet, SpeedStep or that Turbo feature.

All in all, even if the benchmark could not be manipulated, it still fails to represent the crunching power of a system. So why on earth do you want to base the credits on it?
With the provisions you have taken, you are trying to repair a concept that is literally fubar. Maybe you should ask yourself why most projects (especially the bigger ones) use fixed credits. The simple answer is that it is probably the easiest and safest way.
Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5147 - Posted 14 Jul 2009 15:37:18 UTC - in response to Message ID 5144 .
Last modified: 14 Jul 2009 15:52:36 UTC

If you had access to the source code and could implement a counter or something like that, this would be possible with no problem (at least to check, in an asymptotic manner, how much work was done).

BUT: the problem seems to be that Docking runs a wrapper around a wrapper (around a wrapper?), as the CHARMM model, program and force field must be licensed in order to be used, and no changes to the source code are allowed for third parties.

AFAIK, you get the CHARMM source code with your license. After all, you have to compile it for the different platforms. But whether one is allowed to modify it is beyond my knowledge.
Maybe you are right that it is not allowed. Otherwise I can't imagine why we still have no 64-bit binaries ;)

I really think it is not necessary to determine the exact work done per WU. A characterization is completely sufficient, and this can easily be done without access to the source.
Profile Scientific Frontline
Avatar

Joined: Mar 25 09
Posts: 42
ID: 8725
Credit: 788,015
RAC: 0
Message 5148 - Posted 14 Jul 2009 20:10:50 UTC
Last modified: 14 Jul 2009 20:11:42 UTC

I applaud your approach to helping resolve this.
I myself have found your explanation informative and on the mark.
I would highly suggest you (the Docking admins) maintain this discussion with him, since it is conducted intelligently. I would also suggest that someone from your team try to make contact via e-mail and work in a more cooperative manner.

Fighting this out on the boards is not promoting a positive image for the project!

But both parties must be willing not just to present and debate, but to listen and consider.
Heidi-Ann Kennedy
____________

Recognized by the Carnegie Institute of Science . Washington D.C.

Profile Beyond

Joined: Feb 9 09
Posts: 8
ID: 6984
Credit: 3,132,056
RAC: 0
Message 5151 - Posted 15 Jul 2009 13:47:05 UTC - in response to Message ID 5146 .

Generally there are a lot of problems with the BOINC benchmarks even if one does not manipulate them. The benchmark values vary a lot when comparing different BOINC versions and/or different OSes. As an example, look at this computer running client 5.10.45 under Linux and this completely identical machine just with WinXP. The Linux host registers benchmark values of only
Measured floating point speed 747.84 million ops/sec
Measured integer speed 1382.91 million ops/sec
while under Windows I see
Measured floating point speed 1335.22 million ops/sec
Measured integer speed 2249.84 million ops/sec

Quite a difference, I think.

Furthermore, there are sometimes severe problems with CPUs capable of dynamically changing their clock speed. That applies to virtually all notebook CPUs. And you may know that AMD CPUs downclock themselves under light load to 800 MHz or 1 GHz, while under full load they may run at more than 3 GHz. I've seen several systems where the benchmark caused too little load to "wake up" the CPU to its full clock speed, whereas the WUs ran at full throttle afterwards. This leads to severely underclaiming hosts, and if the benchmark values get (correctly) recalculated at some point (there is a random component to this problem), they would get their WUs marked as invalid (because of claiming much more than before) under your proposed system. Another problem is that there may be heavy (non-BOINC) activity on the system when the benchmark is executed. This will also lead to severely reduced scores.

But overclaiming benchmarks are also entirely possible without willfully manipulating anything. Just think of the new Core i7 series and its "Turbo" feature. If only one core is loaded and/or the CPU temperature is low, it raises the frequency of the loaded core(s). This easily leads to benchmark scores (which partly run only single-threaded!) that are not representative of the actual crunching speed. The Hyperthreading feature actually makes this even worse. I think I already gave the example of 88 credits claimed by a Phenom or Core2 and 124 credits claimed by a Core i7 for the same WU. And this problem will only get more pronounced as the CPU manufacturers implement further features that help the crunching speed but not the benchmark score, or extend the functionality of automated load- and temperature-dependent clocking schemes like Cool'n'Quiet, SpeedStep or that Turbo feature.

All in all, even if the benchmark could not be manipulated, it still fails to represent the crunching power of a system. So why on earth do you want to base the credits on it?
With the provisions you have taken, you are trying to repair a concept that is literally fubar. Maybe you should ask yourself why most projects (especially the bigger ones) use fixed credits. The simple answer is that it is probably the easiest and safest way.

I totally agree. The BOINC benchmarking system has always been a mess, and the introduction of the newer processors has pretty much rendered it invalid. The i7 overclaims so badly it's ridiculous. The BOINC developers have done nothing to fix the problem, which makes server-assigned credit the only system that is currently at all equitable.

Profile Beyond

Joined: Feb 9 09
Posts: 8
ID: 6984
Credit: 3,132,056
RAC: 0
Message 5152 - Posted 15 Jul 2009 13:48:58 UTC - in response to Message ID 5139 .

What is our next step? We will put up a survey, open for the next 20 days, to collect your votes and set up the credit system accordingly.

Has the survey been posted?

Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5153 - Posted 15 Jul 2009 14:14:17 UTC - in response to Message ID 5152 .

What is our next step? We will put up a survey, open for the next 20 days, to collect your votes and set up the credit system accordingly.

Has the survey been posted?

If not, we can vote by acclamation ;)

Every participant expressing a clear opinion so far has favored server-assigned (fixed) credits.

Anyone against it?
anthonmg

Joined: Apr 11 09
Posts: 64
ID: 9657
Credit: 17,959,472
RAC: 0
Message 5155 - Posted 15 Jul 2009 21:25:43 UTC - in response to Message ID 5153 .

I'm also for server-assigned credits. Something is definitely up with the current system. I just went through some of my work units and found that different computers are getting different amounts of credit for the same work. My laptop cranks out a work unit in about the same time as my workstation (similar-speed cores, the workstation just has more of them). They complete a similarly sized workunit in about the same time, but the laptop is receiving 1/4 the credit of the workstation. Very weird.

Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5157 - Posted 15 Jul 2009 23:33:10 UTC - in response to Message ID 5155 .
Last modified: 15 Jul 2009 23:34:14 UTC

They complete a similarly sized workunit in about the same time, but the laptop is receiving 1/4 the credit of the workstation. Very weird.
That is the problem with the changing clockspeed during the benchmark I spoke of. One of the reasons why benchmark based credits are simply crap.
anthonmg

Joined: Apr 11 09
Posts: 64
ID: 9657
Credit: 17,959,472
RAC: 0
Message 5158 - Posted 15 Jul 2009 23:50:38 UTC - in response to Message ID 5157 .

They complete a similarly sized workunit in about the same time, but the laptop is receiving 1/4 the credit of the workstation. Very weird.
That is the problem with the changing clockspeed during the benchmark I spoke of. One of the reasons why benchmark based credits are simply crap.


Ah, that would totally explain it:

Going back through work units just on the laptop, I see that there have been a bunch of them lately taking 18000 seconds to complete (the p38 work units). They were usually garnering about 87 credits/work unit, consistent with my other computers as well, but recently the laptop has only been getting 25 credits/work unit even though the time to completion didn't change.

This would explain why the RAC for the laptop has been steadily falling for the last few days even though it's been running full tilt. No wonder my rank is slipping even though I didn't change anything. Sigh.
Profile Beyond

Joined: Feb 9 09
Posts: 8
ID: 6984
Credit: 3,132,056
RAC: 0
Message 5162 - Posted 16 Jul 2009 1:52:05 UTC

Server based credit all the way :-)
The BOINC benchmark system is worthless.

anthonmg

Joined: Apr 11 09
Posts: 64
ID: 9657
Credit: 17,959,472
RAC: 0
Message 5163 - Posted 16 Jul 2009 7:09:08 UTC - in response to Message ID 5162 .

Server based credit all the way :-)
The BOINC benchmark system is worthless.


Yup, that was totally it, actually. I just ran it down from the logs. The problem: the laptop is getting only 1/4 of the credit per work unit that it used to get, or that computers of similar configuration get. The problem started recently.

I went through the logs. I took my laptop into a meeting to take some notes. While there, it ran one of those random benchmark tests that seem to occur once in a while. I leave BOINC running, as the meetings are only an hour or two and we have this extended battery pack on our computers that adds 6 hours of life to the main battery.

However, when off mains power the machine slows down the processor. I just ran benchmarks in both modes and it's about 1/4 or more slower when disconnected from power. It hadn't done a benchmark update SINCE then, and so hadn't noticed that the processors were back to full speed. GAH, all these days of full-speed work units for 1/4 credit. I TOTALLY vote for server-assigned credits.
Tony
Kevint

Joined: Jun 26 08
Posts: 10
ID: 389
Credit: 2,724,494
RAC: 0
Message 5180 - Posted 17 Jul 2009 15:19:22 UTC
Last modified: 17 Jul 2009 15:22:28 UTC

Could the admins please delete this host and remove the user's entire credit?

36565

Obviously this is someone who cares nothing about the science and only about generating credit.


GenuineIntel
Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz [x86 Family 6 Model 15 Stepping 11]
Number of CPUs 4
Operating System Microsoft Windows Server 2003
Standard Server x86 Edition, Service Pack 2, (05.02.3790.00)
Memory 2038.06 MB
Cache 244.14 KB
Measured floating point speed 8648.2 million ops/sec
Measured integer speed 18731.66 million ops/sec


And

36563

GenuineIntel
Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz [x86 Family 6 Model 15 Stepping 11]
Number of CPUs 4
Operating System Microsoft Windows Server 2003
Standard Server x86 Edition, Service Pack 2, (05.02.3790.00)
Memory 2037.38 MB
Cache 244.14 KB
Measured floating point speed 6490.6 million ops/sec
Measured integer speed 8604.28 million ops/sec

Profile [B^S] BOINC-SG
Volunteer tester
Avatar

Joined: Oct 2 06
Posts: 17
ID: 136
Credit: 52,985
RAC: 0
Message 5189 - Posted 17 Jul 2009 20:34:29 UTC

Pretest a new batch of WUs. Set a fixed amount of credits for it.

Even if there are a few WUs that finish earlier (or last longer?), all in all it will be as fair as possible.

Cosmology also had fixed credits (was it 50 per WU?) with varying WU lengths, but every user has the chance to get a shorter WU for the same amount of credits - instead of a few cheaters taking advantage of this crappy benchmark-based credit system...

Best regards!
____________


My NEW BOINC-Site

Why people joined BOINC Synergy...

koschi

Joined: Jul 3 09
Posts: 3
ID: 14817
Credit: 338,558
RAC: 0
Message 5191 - Posted 17 Jul 2009 22:53:37 UTC

Same for Einstein: within one science run, the time needed to complete a unit differs by up to 30%.
Over a number of work units it averages out for everyone...

server side credits + 1

koschi

Joined: Jul 3 09
Posts: 3
ID: 14817
Credit: 338,558
RAC: 0
Message 5192 - Posted 17 Jul 2009 22:59:17 UTC
Last modified: 17 Jul 2009 23:08:58 UTC

Duplicate post, kindly remove it ;-)

JKuehl2

Joined: Jul 2 09
Posts: 3
ID: 14773
Credit: 277,202
RAC: 0
Message 5203 - Posted 18 Jul 2009 9:01:35 UTC
Last modified: 18 Jul 2009 9:02:04 UTC

+1 from me for fixed credits.

btw: has anyone accounted for the +39 votes from Phoenix Rising "for server-side credits" from this thread / post? http://docking.cis.udel.edu/community/forum/thread.php?id=448&nowrap=true#5197

MacRuh

Joined: Feb 7 09
Posts: 1
ID: 6907
Credit: 36,900
RAC: 0
Message 5205 - Posted 18 Jul 2009 9:27:51 UTC

fixed credits please!

Profile MrBad
Avatar

Joined: Sep 3 08
Posts: 4
ID: 615
Credit: 1,050,243
RAC: 0
Message 5206 - Posted 18 Jul 2009 9:37:15 UTC

+ 1 from me.

Profile Sylvester@Planet 3Dnow!

Joined: Jun 1 09
Posts: 1
ID: 12486
Credit: 91,707,092
RAC: 0
Message 5207 - Posted 18 Jul 2009 9:37:53 UTC

I think the only possibility to solve the credit problem is fixed credit.

elite.bl4ze

Joined: Jul 5 09
Posts: 3
ID: 14933
Credit: 769,782
RAC: 0
Message 5208 - Posted 18 Jul 2009 9:42:50 UTC
Last modified: 18 Jul 2009 9:44:32 UTC

fixed Credits rock!:P

fixed credits +1

greets

elite.bl4ze

wintermute_P3DN!

Joined: Jun 6 09
Posts: 1
ID: 12731
Credit: 250,442
RAC: 0
Message 5209 - Posted 18 Jul 2009 9:45:23 UTC - in response to Message ID 5207 .

I think the only possibility to solve the credit problem is fixed credit.

Fixed credits for me, please!
(and a 64-bit version would be nice)
erde-m@Planet 3DNow!

Joined: Jul 3 09
Posts: 1
ID: 14818
Credit: 4,002,839
RAC: 0
Message 5210 - Posted 18 Jul 2009 9:52:48 UTC

+1 from me for fixed credits.

Fixed credits for me, please! A 64-bit version would be nice.

Greets

sandro

Joined: Sep 3 08
Posts: 4
ID: 512
Credit: 4,076,636
RAC: 0
Message 5211 - Posted 18 Jul 2009 9:55:07 UTC - in response to Message ID 5210 .

+1 from me for fixed credits.

[MTB]JackTheRipper@Planet 3DNow!

Joined: Jul 5 09
Posts: 1
ID: 14969
Credit: 1,522,841
RAC: 0
Message 5212 - Posted 18 Jul 2009 10:02:18 UTC

My vote for fixed credits!

_[Docker]_

Joined: Jul 12 09
Posts: 1
ID: 15446
Credit: 88
RAC: 0
Message 5213 - Posted 18 Jul 2009 10:07:29 UTC

My vote for fixed credits!

neletma

Joined: Jul 2 09
Posts: 1
ID: 14775
Credit: 21,321
RAC: 0
Message 5214 - Posted 18 Jul 2009 10:25:53 UTC

my vote for fixed

Caipi

Joined: Jul 5 09
Posts: 1
ID: 14947
Credit: 1,502,986
RAC: 0
Message 5215 - Posted 18 Jul 2009 10:28:23 UTC

vote for fixed credits!

Profile L@MiR
Avatar

Joined: Dec 7 08
Posts: 3
ID: 4510
Credit: 618,946
RAC: 0
Message 5216 - Posted 18 Jul 2009 10:30:09 UTC
Last modified: 18 Jul 2009 10:32:07 UTC

"Yes, we can." <- that is like the choice between plague and cholera.

Fixed credits are the way!

I just saw that _[Docker]_ got 88 credits for a WU... Is that politically correct? In the good new Germany, 88 stands for /:=) and is prohibited! Allow me a joke.

Thank you for your attention.

Mente

Joined: Jul 10 09
Posts: 1
ID: 15300
Credit: 124,480
RAC: 0
Message 5217 - Posted 18 Jul 2009 10:39:59 UTC

+1 from me for fixed credits.

Fixed credits for me, please! A 64-bit version would be nice.

Greets

Profile Sabroe_SMC

Joined: Jul 2 09
Posts: 1
ID: 14782
Credit: 5,425,124
RAC: 0
Message 5218 - Posted 18 Jul 2009 10:46:41 UTC

Fixed credits are the only chance for D@H to avoid cheating.

Profile [SG]SDI

Joined: Aug 27 08
Posts: 1
ID: 401
Credit: 298,145
RAC: 0
Message 5219 - Posted 18 Jul 2009 11:15:27 UTC - in response to Message ID 5218 .

In fact, all WUs run for about the same time (+-10%). Set fixed credits for that and no Docker account can make trouble...

Pjack

Joined: Jul 2 09
Posts: 1
ID: 14778
Credit: 101,970
RAC: 0
Message 5220 - Posted 18 Jul 2009 12:29:46 UTC

[X]fixed Credits

camo@Planet 3DNow!

Joined: Jul 5 09
Posts: 2
ID: 14972
Credit: 819,893
RAC: 0
Message 5221 - Posted 18 Jul 2009 12:32:32 UTC - in response to Message ID 5219 .

+1 for fixed credits please!

Profile Michela
Forum moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar

Joined: Sep 13 06
Posts: 163
ID: 10
Credit: 97,083
RAC: 0
Message 5222 - Posted 18 Jul 2009 13:07:47 UTC - in response to Message ID 5221 .

We are testing D@H with fixed credits.

Michela

____________
If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'!

Fred Verster
Avatar

Joined: May 8 09
Posts: 26
ID: 11034
Credit: 2,647,353
RAC: 0
Message 5223 - Posted 18 Jul 2009 13:31:08 UTC
Last modified: 18 Jul 2009 14:02:50 UTC

Hi, quite new here and this is my first post.

Benchmark only, without taking time into account, simply sucks.

Most of the WUs I've seen have, to within a few %, the same amount of data to be crunched. But this can change, of course.
Fixed credits, IMHO, will give less trouble compared to a FLOPS- and time-based method.
But I'm still NOT convinced about what is the most reasonable and fair method of credit approval!

Does Docking use quorum-based checking and/or validation?
What other methods are or have been used?
And I have some reading to do. ;)

cumec

Joined: Jul 5 09
Posts: 1
ID: 14964
Credit: 21,806
RAC: 0
Message 5224 - Posted 18 Jul 2009 15:08:30 UTC

[x] fixed credits

Profile Scientific Frontline
Avatar

Joined: Mar 25 09
Posts: 42
ID: 8725
Credit: 788,015
RAC: 0
Message 5225 - Posted 18 Jul 2009 16:46:25 UTC - in response to Message ID 5222 .
Last modified: 18 Jul 2009 16:56:41 UTC

We are testing D@H with fixed credits.

Michela

Nice to hear, Michela!
As mentioned elsewhere:
39 votes for fixed credits from Phoenix Rising.
Really though, WU lengths are always pretty close.
Just one figure would work for most of us.
Sometimes we get a little extra, and sometimes a little less for our time.
Yet it should work out in the long haul.

Bet you never thought crunchers could be such a pain in the butt ~smiles~
Take care,
Heidi-Ann
____________

Recognized by the Carnegie Institute of Science, Washington D.C.
Profile aendgraend

Joined: Aug 26 08
Posts: 1
ID: 397
Credit: 155,365
RAC: 0
Message 5227 - Posted 18 Jul 2009 20:11:51 UTC

Fixed credits - that will make my day. Signed!

(Thanks, Gipsel, for the heads-up ;-) )

Opethbass

Joined: Jul 2 09
Posts: 1
ID: 14765
Credit: 17,097
RAC: 0
Message 5229 - Posted 18 Jul 2009 23:00:50 UTC

[x] fixed Credits.
You crunch a WU and know what you get for it. Simple and clear. And no one will be upset that his old computer gets less credit; that is easy to understand.

jcworks

Joined: Jul 9 09
Posts: 7
ID: 15212
Credit: 20,164,099
RAC: 0
Message 5232 - Posted 19 Jul 2009 6:48:30 UTC

[x] fixed credits, of course!

No one wonders that a new 300 hp car is faster than an old 50 hp car... why should that be different with computers?


off topic:
OK, in (most?) US states there is a speed limit ;) Come to Germany and try it without one :) It's fun!
I thought there was a US state without a speed limit, but I couldn't find one. I'm just curious: is there one?

Profile Scientific Frontline
Avatar

Joined: Mar 25 09
Posts: 42
ID: 8725
Credit: 788,015
RAC: 0
Message 5234 - Posted 19 Jul 2009 14:07:14 UTC - in response to Message ID 5232 .

[x] fixed credits, of course!

No one wonders that a new 300 hp car is faster than an old 50 hp car... why should that be different with computers?


off topic:
OK, in (most?) US states there is a speed limit ;) Come to Germany and try it without one :) It's fun!
I thought there was a US state without a speed limit, but I couldn't find one. I'm just curious: is there one?


Off topic response
Montana had some highways without one, but I cannot confirm whether they still do; it has been about ten years since I was last on them.
____________

Recognized by the Carnegie Institute of Science, Washington D.C.
fractal

Joined: Sep 3 08
Posts: 10
ID: 563
Credit: 1,285,769
RAC: 0
Message 5235 - Posted 19 Jul 2009 18:48:54 UTC

I have been sitting back quietly crunching for Docking@home for many months. I am not one of the big heavyweights who crunch only for cobbles. I only have four quad-core systems dedicated to Docking plus a few part-time crunchers. After reading this thread for the third time ... it has been going on for over a year now ... several things are pretty obvious.

a. We all know that fixed credits reduce cheating in projects where batches can be adequately benchmarked.

b. We all know that docking@home credits are below industry average.

c. We all know that nothing has changed in over a year. Plenty of talk, little action.

I just reviewed the completed work on my machines and indeed, it appears that batches are fairly consistent. The active batch is 48 cobbles. Shortly before that was an 83-cobble batch. It seems that you should have a pretty good idea how much processing a work unit should take. Fixed credits with "flag for inspection" on anyone who claims something outside the range of reasonable seems to me to be a no-brainer. If a flagged work unit comes out higher than the benchmark, then reward the contributor. If not, and you see a pattern, ban them.

Finally, credits are low. You know that. I know that. Everyone knows that. The exact multiplier to conform to industry standard is somewhere near 1.5x what you are granting. You can no longer use SETI as the "gold standard" because the CUDA client is not separated, but the numbers posted over the years show a pretty clear pattern.

In summary, I am contributing to docking@home because I think it is doing good work. But the DC nut in me feels slighted by the below-average credits. The claim that you cannot adjust credits to a level more consistent with the rest of the field, out of fear of attracting too many cheaters, does little to satisfy, especially when it seems clear that you are doing batches of fixed-size units.
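A minimal sketch of the "fixed credit plus flag for inspection" scheme proposed above, assuming hypothetical per-batch values (48 and 83 are just the figures observed in this post) and an illustrative tolerance; this is not the project's actual validator logic:

# Hypothetical sketch of fixed credit plus "flag for inspection".
# Batch values and the tolerance are illustrative, not the project's real code.

FIXED_CREDIT = {"active_batch": 48.0, "previous_batch": 83.0}  # cobbles per WU
TOLERANCE = 0.5  # flag claims more than 50% away from the fixed value

def grant_credit(batch, claimed):
    """Always grant the fixed per-batch credit; flag divergent claims."""
    fixed = FIXED_CREDIT[batch]
    flagged = abs(claimed - fixed) / fixed > TOLERANCE
    return fixed, flagged

granted, suspicious = grant_credit("active_batch", claimed=140.0)
print(granted, suspicious)  # 48.0 True -> queue this host for review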

Profile [B^S] BOINC-SG
Volunteer tester
Avatar

Joined: Oct 2 06
Posts: 17
ID: 136
Credit: 52,985
RAC: 0
Message 5236 - Posted 20 Jul 2009 11:48:37 UTC

Has anybody seen fixed credits yet?

Cheers!
____________


My NEW BOINC-Site

Why people joined BOINC Synergy...

Profile Trilce Estrada
Forum moderator
Project administrator
Project developer
Project tester

Joined: Sep 19 06
Posts: 189
ID: 119
Credit: 1,217,236
RAC: 0
Message 5237 - Posted 20 Jul 2009 13:59:14 UTC

We are testing fixed credits in a mirrored system. We expect to have everything ready in the next couple of days.

jcworks

Joined: Jul 9 09
Posts: 7
ID: 15212
Credit: 20,164,099
RAC: 0
Message 5238 - Posted 20 Jul 2009 16:27:11 UTC - in response to Message ID 5237 .

We are testing fixed credits in a mirrored system. We expect to have everything ready in the next couple of days.


Ah, good news, and thanks for the info. It's nice to see that you're working on it.

Profile [B^S] BOINC-SG
Volunteer tester
Avatar

Joined: Oct 2 06
Posts: 17
ID: 136
Credit: 52,985
RAC: 0
Message 5239 - Posted 20 Jul 2009 20:55:31 UTC

Thanx for the info!
____________


My NEW BOINC-Site

Why people joined BOINC Synergy...

Profile Wang Solutions
Volunteer tester
Avatar

Joined: Nov 14 06
Posts: 5
ID: 272
Credit: 5,326,180
RAC: 0
Message 5242 - Posted 21 Jul 2009 9:52:33 UTC

Delighted to hear that we will soon be getting fixed credits. It is the only way to go.
____________
Proud member of BOINC@AUSTRALIA

Profile Trilce Estrada
Forum moderator
Project administrator
Project developer
Project tester

Joined: Sep 19 06
Posts: 189
ID: 119
Credit: 1,217,236
RAC: 0
Message 5260 - Posted 31 Jul 2009 19:21:19 UTC
Last modified: 31 Jul 2009 19:21:34 UTC

It took more than 2 days because of the non-deterministic nature of our tasks, but we are ready to move to fixed credits tomorrow, after we finish the distribution of 1k1i-trypsin-model14.

We'll keep you posted

Profile Trilce Estrada
Forum moderator
Project administrator
Project developer
Project tester

Joined: Sep 19 06
Posts: 189
ID: 119
Credit: 1,217,236
RAC: 0
Message 5273 - Posted 3 Aug 2009 20:37:33 UTC

Fixed credits are in use now. We will keep tuning the FLOPS constant in the following weeks. Your input will be greatly appreciated.

Profile Conan
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 219
ID: 100
Credit: 4,256,493
RAC: 0
Message 5275 - Posted 4 Aug 2009 12:03:18 UTC - in response to Message ID 5273 .
Last modified: 4 Aug 2009 12:05:25 UTC

Fixed credits are in use now. We will keep tuning the FLOPS constant in the following weeks. Your input will be greatly appreciated.


Thanks to you Trilce, Michela and the rest of the Docking team for your work on this issue.
I have been waiting a very long time for something to be done, as my poor old Opterons running Linux have been hammered credit-wise, particularly by Windows machines.
I have made comments before about the low granted credit (as long as you did not change your benchmarks), so the amount being awarded at the moment is great; please leave it there.

I have now increased my preference share for Docking.

Keep up the great work,
Conan.
____________
Profile Cori
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 161
ID: 90
Credit: 5,817
RAC: 0
Message 5282 - Posted 5 Aug 2009 9:08:53 UTC

Haven't been around much lately... and now what do my tiny little eyes see?
FIXED credits!

Thank you, thank you, thank you! I've been waiting for this for a long time!
____________
Bribe me with Lasagna!! :-)

camo@Planet 3DNow!

Joined: Jul 5 09
Posts: 2
ID: 14972
Credit: 819,893
RAC: 0
Message 5283 - Posted 5 Aug 2009 10:48:19 UTC - in response to Message ID 5273 .

Fixed credits are in use now. We will keep tuning the FLOPS constant in the following weeks. Your input will be greatly appreciated.

Many thanks for fixed credits. :)
Profile Saenger
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 125
ID: 79
Credit: 411,959
RAC: 0
Message 5285 - Posted 5 Aug 2009 17:01:07 UTC

Fixed credits are great, really. Very much appreciated.

I always had the impression that something similar was already in place, as my computer always claimed less (and got granted the same amount) than the benchmarks would have yielded: a 21 C/h claim compared to 25 C/h according to the benchmarks. I thought that my machine was not as well suited to this algorithm as others, and thus got less than claims based on FLOP count or whatever.

Now I still claim the same (where do these claims come from, anyway?), but am granted more than double my benchmarks. It looks like I'm exorbitantly well suited to the current algorithm (or rather, my puter is).

55 C/h is at the top end of credit-granting projects for CPU on my machine, far more than average. As there are no wingmen, I can't compare directly with other machines to see how my performance rates and why I get so much more than usual.
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

Profile Trilce Estrada
Forum moderator
Project administrator
Project developer
Project tester

Joined: Sep 19 06
Posts: 189
ID: 119
Credit: 1,217,236
RAC: 0
Message 5286 - Posted 5 Aug 2009 17:31:03 UTC

Hello everybody,

It's great to hear that you are pleased!! Hopefully we won't have many of those nasty workunits that take more than 13 hrs (or the story will be different).


Hi Saenger,

Claimed credits still come from the benchmarks, but claims are now ignored and the assigned credit comes from the server (which calculates how much a workunit of a given type -protein-ligand-model- is worth). Most people are getting more than they claim (remember, everybody was complaining that our project was 1.5x behind the average of the BOINC projects). I assume that in a few weeks most machines will stabilize under the new system, and then it will start making sense to compare performance again.
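As a rough illustration of what server-assigned credit keyed by workunit type could look like, here is a hypothetical lookup table; the entry and its value are made up for the example, reusing the 1k1i-trypsin-model14 name mentioned earlier in the thread:

# Hypothetical server-side credit table keyed by workunit type
# (protein-ligand-model). The entry and its value are illustrative only.

CREDIT_PER_TYPE = {
    ("1k1i", "trypsin", "model14"): 48.0,
}

def assigned_credit(protein, ligand, model):
    # Benchmark-based claims from the host are ignored entirely.
    return CREDIT_PER_TYPE[(protein, ligand, model)]

print(assigned_credit("1k1i", "trypsin", "model14"))  # 48.0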

Profile Saenger
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 125
ID: 79
Credit: 411,959
RAC: 0
Message 5288 - Posted 5 Aug 2009 17:55:46 UTC - in response to Message ID 5286 .
Last modified: 5 Aug 2009 17:57:43 UTC

Claimed credits came from the benchmarks (still)

My Computer has benches as follows:
Measured floating point speed 3239.93 million ops/sec
Measured integer speed 8951.05 million ops/sec

claimed credit = ([whetstone]+[dhrystone]) * wu_cpu_time_in_sec / 1728000

For my machine per hour: (3239.93 + 8951.05) * 3600 / 1728000 = 25.397875

My claims here have been very consistent at 21 credits per hour for a very long time. For example this one, #6947368:

Official formula: (3239.93 + 8951.05) * 8267.269 / 1728000 = 58.325295737

but:

Claimed credit 48.3207651724276
Granted credit 126.673348

So I'm definitely not claiming according to my benchmarks.
I'm not being granted that either; I'm getting a little more than double what my official claim would be.
In other benchmark-based projects my claims are calculated correctly, so the faulty calculation does not originate on my machine.
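The arithmetic above, restated as a small script (nothing new here, just the standard formula with the benchmark numbers quoted in this post):

# The standard BOINC benchmark-based claim, with the benchmark
# numbers quoted above. Purely a restatement of the arithmetic.

whetstone = 3239.93   # measured floating point speed, million ops/sec
dhrystone = 8951.05   # measured integer speed, million ops/sec

def claimed_credit(cpu_time_sec):
    return (whetstone + dhrystone) * cpu_time_sec / 1728000

print(claimed_credit(3600))      # ~25.40 credits per hour
print(claimed_credit(8267.269))  # ~58.33 for WU #6947368, vs. 48.32 actually claimed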
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki
Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5289 - Posted 5 Aug 2009 23:07:36 UTC - in response to Message ID 5288 .
Last modified: 5 Aug 2009 23:09:53 UTC

Claimed credits came from the benchmarks (still)

My Computer has benches as follows:
Measured floating point speed 3239.93 million ops/sec
Measured integer speed 8951.05 million ops/sec

claimed credit = ([whetstone]+[dhrystone]) * wu_cpu_time_in_sec / 1728000

For my machine per hour: (3239.93 + 8951.05) * 3600 / 1728000 = 25.397875

My claims here have been very consistent at 21 credits per hour for a very long time. [..]
So I'm definitely not claiming according to my benchmarks.

Sure you do. Look here for an explanation! How it all started ;)
Profile Saenger
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 125
ID: 79
Credit: 411,959
RAC: 0
Message 5291 - Posted 6 Aug 2009 7:06:44 UTC - in response to Message ID 5289 .

Sure you do. Look here for an explanation! How it all started ;)

Thanks, I hadn't read all posts, since nothing really happened with the meager credits after the initial posts here and I lost contact a bit.

How did you know this formula? Is it stated anywhere? Is there any explanation for using a non-standard formula?
I understand leaving benchmarks and moving to FLOP count or fixed credits, but altering the original formula makes absolutely no sense to me if the buggy benches are still used.
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki
Cluster Physik

Joined: Jul 2 09
Posts: 35
ID: 14795
Credit: 16,067,012
RAC: 0
Message 5292 - Posted 6 Aug 2009 11:23:04 UTC - in response to Message ID 5291 .
Last modified: 6 Aug 2009 11:26:42 UTC

Sure you do. Look here for an explanation! How it all started ;)
How did you know this formula?
I'm smart ;)
No, seriously, hmm, of course I am, but I simply looked at the claimed credits of two different computers and their benchmark scores. If you assume the credit formula looks somewhat similar to the original one (i.e. just with two weighting factors added and another divisor at the end), you have enough information to solve that small set of linear equations.

Is it stated anywhere?
Yes, in my post above ;) Otherwise no. But you can easily check that it's true.

Is there any explanation for using a non-standard formula?
Not really. I think the project's reasoning was: we weight the benchmarks according to the assumed workload; the calculations are floating-point dominated, so let's weight the FP benchmark more. When they saw that the credits were extremely low afterwards, they altered the formula a second time so that a 1000 MFLOPS/1000 MIPS computer would get 1000 credits a week (instead of 100 a day).
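For reference, the unmodified formula quoted earlier in the thread does grant the canonical 100 credits a day to a 1000 MFLOPS / 1000 MIPS reference machine; a quick check:

# Sanity check: the standard formula grants the reference
# 1000 MFLOPS / 1000 MIPS machine exactly 100 credits per day.
whetstone, dhrystone = 1000, 1000   # million ops/sec
seconds_per_day = 86400
print((whetstone + dhrystone) * seconds_per_day / 1728000)  # 100.0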

I understand leaving benchmarks and moving to FLOP count or fixed credits, but altering the original formula makes absolutely no sense to me if the buggy benches are still used.
You are completely right. But that's beating a dead horse as we have fixed credits now.
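A sketch of the deduction described above, with synthetic numbers: assume claimed = (a*whetstone + b*dhrystone) * cpu_time / divisor, observe two machines' benchmarks and claims, and solve the resulting 2x2 linear system for the hidden weights a and b:

# Sketch of recovering two hidden weighting factors from two machines'
# benchmarks and claims. All numbers below are synthetic examples.

def solve_weights(m1, m2):
    # Each m = (whetstone, dhrystone, c) with c = claim * divisor / cpu_time,
    # i.e. c = a*whetstone + b*dhrystone. Solve the 2x2 system for (a, b).
    (w1, d1, c1), (w2, d2, c2) = m1, m2
    det = w1 * d2 - w2 * d1
    a = (c1 * d2 - c2 * d1) / det
    b = (w1 * c2 - w2 * c1) / det
    return a, b

# Two synthetic machines built with weights a=2.0, b=0.5, recovered exactly:
print(solve_weights((3000, 9000, 2.0 * 3000 + 0.5 * 9000),
                    (2000, 4000, 2.0 * 2000 + 0.5 * 4000)))  # (2.0, 0.5)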
Profile Saenger
Volunteer tester
Avatar

Joined: Sep 13 06
Posts: 125
ID: 79
Credit: 411,959
RAC: 0
Message 5299 - Posted 8 Aug 2009 9:54:53 UTC - in response to Message ID 5292 .

You are completely right. But that's beating a dead horse as we have fixed credits now.


Not quite dead yet, as it's still used for the calculation of the now rather worthless claims. I'd like this horse really dead ;)

But of course you're right, it's now rather unimportant. I'd like to know what number of credits per hour they take as their standard.
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki
fractal

Joined: Sep 3 08
Posts: 10
ID: 563
Credit: 1,285,769
RAC: 0
Message 5305 - Posted 9 Aug 2009 17:50:47 UTC - in response to Message ID 5273 .

Fixed credits are in use now. We will keep tuning the FLOPS constant in the following weeks. Your input will be greatly appreciated.

Thank you, thank you, THANK YOU!
