Cobblestones
Message boards : Number crunching : Cobblestones
Obviously there are different ways to grant credit to you guys. How that should be done has long been debated.
ID: 4011
Well... I will be the first to kick off here.
ID: 4013
I'm quite with Rene.
ID: 4021
I support the same FLOPS-based approach the project took before.
ID: 4036
> Have been away a few days.
ID: 4037
I'd prefer fixed or server-side calculated credits if possible. "Possible" means either all WUs have roughly the same size (for fixed credits) or a pre-known variable size (for server-side calculation). +1 I also want to emphasize that increasing the quorum should be done only if the science requires it. Increasing the quorum purely for credit management is a horrible waste of resources. There's always a better way. ____________ Dublin, CA Team SETI.USA
ID: 4039
I'd prefer fixed or server-side calculated credits if possible. "Possible" means either all WUs have roughly the same size (for fixed credits) or a pre-known variable size (for server-side calculation). I agree with you both! ;-))) ____________ Bribe me with Lasagna!! :-)
ID: 4041
I'd prefer fixed or server-side calculated credits if possible. "Possible" means either all WUs have roughly the same size (for fixed credits) or a pre-known variable size (for server-side calculation). I also agree with both. ____________ Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.
ID: 4042
If it is not too late here are my 2 cents...
ID: 4045
Since the WUs will have variable length, credits based on FLOPS would be my choice.
ID: 4046
Since the WUs will have variable length, credits based on FLOPS would be my choice. I'd agree with FLOPS but when assigning the credit per flop, I'd keep in mind other resources being used. IIRC, charmm takes a lot more memory than some other projects and was very disk or OS call intensive. I'm not sure Andre ever found out what was going on with the heavy disk/OS activity. On Linux, it showed up as large amounts of CPU time spent in "System" space. ISTR that many quad core machines (Often Macs) experienced vastly increased time per WU as more cores were working on docking until it basically paralyzed the machine. That was being worked on when the project shut down for the move. It may even have been a bug that was subsequently fixed in the BOINC client. There seemed to be a massive number of calls to request the time being made to the OS. I don't recall if that was ever linked to re-reading the script that runs charmm and possible updating of the last-access time for the script file or if it turned out to be something else entirely. ____________ The views expressed are my own. Facts are subject to memory error :-) Have you read a good science fiction novel lately? |
ID: 4061
Looking at these WUs of mine, I take it that the project has already caved in to some sort of credit reduction: http://docking.cis.udel.edu/result.php?resultid=1770 = 50 credits per hour, reported in Feb 2008, versus this one reported in Mar 2008 > http://docking.cis.udel.edu/result.php?resultid=7007 = 20 credits per hour ... ???
ID: 4065
Good point. No, I've not been able to figure out why these system calls to get the time of day on linux are made a gazillion times per run. I do think that this might cause the massive difference in runtime between linux and windows. The runtime difference issue is already on the project's to-do list, so I'll make sure that whatever notes I have on this will be passed on to the next person trying to crack this issue.
Since the WUs will have variable length, credits based on FLOPS would be my choice. ____________ D@H the greatest project in the world... a while from now! |
ID: 4105
Good point. No, I've not been able to figure out why these system calls to get the time of day on linux are made a gazillion times per run. I do think that this might cause the massive difference in runtime between linux and windows. The runtime difference issue is already on the project's to-do list, so I'll make sure that whatever notes I have on this will be passed on to the next person trying to crack this issue. The problem of execution time differing between Windows and Linux needs to be solved before we move on to fixed credit based on FLOPS. I will be working on this issue tomorrow. Andre, can you pass me the notes you have on this issue? cheers, Arun
ID: 4113
I've emailed you the notes I could find. Hope they will be a little bit useful.
____________ D@H the greatest project in the world... a while from now! |
ID: 4116
Andre wrote:
Good point. No, I've not been able to figure out why these system calls to get the time of day on linux are made a gazillion times per run. I do think that this might cause the massive difference in runtime between linux and windows. The runtime difference issue is already on the project's to-do list, so I'll make sure that whatever notes I have on this will be passed on to the next person trying to crack this issue. Do you know for sure if the problem still exists? Unfortunately, I don't remember the details but while docking was shut down there was a fix mentioned on the BOINC developers mailing list (might have been the forums) that sounded to me like it might have been causing a similar problem. It's been a long time so I don't recall if it was in the BOINC client or in the application framework that was distributed. Since it didn't affect most applications, it must have been in a support function or something. A polling loop with no delay in it that was calling the OS time of day function to check elapsed time was what it sounded like. Might have had something to do with a heartbeat function. I'll see if I can find it. ____________ The views expressed are my own. Facts are subject to memory error :-) Have you read a good science fiction novel lately? |
ID: 4123
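David's description of a polling loop with no delay can be sketched in Python (an illustrative mock-up, not the actual BOINC or charmm code; the function names and the 10 ms interval are mine). It shows why a missing sleep turns an elapsed-time check into a flood of OS time calls:

```python
import time

def poll_elapsed_busy(duration, clock=time.monotonic):
    """Suspected buggy pattern: check elapsed time in a tight loop.
    Every iteration issues an OS time call, so the call count explodes."""
    calls = 0
    start = clock()
    while clock() - start < duration:
        calls += 1
    return calls

def poll_elapsed_delayed(duration, interval=0.01, clock=time.monotonic):
    """Fixed pattern: sleep between checks, so the number of time calls
    is bounded by roughly duration / interval instead of CPU speed."""
    calls = 0
    start = clock()
    while clock() - start < duration:
        calls += 1
        time.sleep(interval)
    return calls
```

Over a 50 ms window the busy variant typically makes tens of thousands of time calls while the delayed one makes a handful; a large share of a profile spent in times()/gettimeofday is a classic signature of the former.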
Hmmm, that sounds interesting. Yes, please let the project team know if you find something on that. In the meantime, Arun could try commenting out the boinc calls in the charmm code and see if the time calls are still being made. If not, then that points in the direction you are thinking.
____________ D@H the greatest project in the world... a while from now! |
ID: 4133
Hmmm, that sounds interesting. Yes, please let the project team know if you find something on that. In the meantime, Arun could try commenting out the boinc calls in the charmm code and see if the time calls are still being made. If not, then that points in the direction you are thinking. Andre and David, Thanks for the informative discussion. I used the gprof profiling tool and found that the times() function accounts for 7.02% of the run time, taking 5.12 seconds out of the total 72.98 seconds for this charmm execution. times() was the 3rd most time-consuming function after the enbfs8 and ephifs fortran calls. The output of strace also showed that the times() function is called many times. Any suggestions? David, any information you can find will be useful. cheers Arun
ID: 4135
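As a quick sanity check on the gprof figures above (taking the reported 72.98 s total and 5.12 s attributed to times() at face value), the fraction does match the quoted percentage:

```python
total_s = 72.98  # total charmm run time sampled by gprof (as reported)
times_s = 5.12   # time attributed to times() (as reported)

share = times_s / total_s
print(f"{share:.2%}")  # 7.02%, matching the figure gprof printed
```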
So far, I haven't been able to find where I read it. IIRC, it was just a few months after Docking shut down at UTEP. I thought it was fixed at the time but I'm not sure on that and I can't find it in the archives or BOINC forums. You might have to ask on the BOINC DEV mailing list.
ID: 4137
The amount of credit granted on this project (particularly for my Linux-running Opterons at 8-10 cr/h) is not all that high, very low actually, as it is still based on the benchmark system.
ID: 4244
The amount of credit granted on this project (particularly for my Linux-running Opterons at 8-10 cr/h) is not all that high, very low actually, as it is still based on the benchmark system. I just created a thread to update you all on the next steps to move (finally) to beta. The issue with the credits has changed since we discussed it in one of our threads. Unfortunately each work-unit does not take a deterministic amount of time. This discourages us from using a fixed amount of credits. We have made major changes to the code of charmm, and now we use the same charmm source for Windows and Linux with the same compiler optimizations. This should prevent the significant differences between the Windows and Linux versions that were observed in the past. Sure we can increase the amount of credits per flops. Also we want to identify those volunteers who give us the best results. We are working on a web-page that classifies the top results and their volunteers. Our goal is to have D@H in Beta on September 1. We are moving forward! Michela ____________ If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'!
ID: 4247
The amount of credit granted on this project (particularly for my Linux-running Opterons at 8-10 cr/h) is not all that high, very low actually, as it is still based on the benchmark system. G'Day Michela, Great to hear that the project is moving forward at a much quicker rate now. Things are starting to run more smoothly, which will help. Your and your team's rapid responses on the forum are a big plus and much appreciated. Thanks for the info on what's happening and thanks also for the credit note. With regard to the new keys, I will finish what I currently have, then detach and reattach each Linux machine. The Windows machine is going well now after detaching and reattaching twice. ____________
ID: 4259
For comparison, on this machine I am averaging 13.8 per CPU hour here, 15.3 per CPU hour at SZTAKI, 19.4 per CPU hour at Rosetta and 21.1 per CPU hour at Einstein and 33.8 per CPU hour at QMC. (Q6600 @ 2.4GHz, Win XP).
ID: 4264
For comparison, on this machine I am averaging 13.8 per CPU hour here, 15.3 per CPU hour at SZTAKI, 19.4 per CPU hour at Rosetta and 21.1 per CPU hour at Einstein and 33.8 per CPU hour at QMC. (Q6600 @ 2.4GHz, Win XP). I have found quite similar results for my dual core lappy (T7700@2.4 Ghz) under Win x64:
ID: 4274
For comparison, on this machine I am averaging 13.8 per CPU hour here, 15.3 per CPU hour at SZTAKI, 19.4 per CPU hour at Rosetta and 21.1 per CPU hour at Einstein and 33.8 per CPU hour at QMC. (Q6600 @ 2.4GHz, Win XP). We definitely need to give you all more credits!!! I will look at this today. Michela ____________ If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'! |
ID: 4275
My puter has these C/h rates for various projects (descending order):
ID: 4276
For comparison, on this machine I am averaging 13.8 per CPU hour here, 15.3 per CPU hour at SZTAKI, 19.4 per CPU hour at Rosetta and 21.1 per CPU hour at Einstein and 33.8 per CPU hour at QMC. (Q6600 @ 2.4GHz, Win XP). Hey, that sounds good! Any news yet? *grin* ____________ Bribe me with Lasagna!! :-) |
ID: 4280
We definitely need to give you all more credits!!! Yes, agreed. The current ones are benchmark-dependent because of quorum 1; that's a nice and easy way for cheaters to use optimized clients. Think about some fixed credits maybe. My puter has these C/h rates for various projects (descending order): Obviously you haven't crunched Cosmo for a while, Saenger. They fell down to a level under SETI! ;) ____________ Life is Science, and Science rules. To the universe and beyond Proud member of BOINC@Heidelberg
ID: 4281
We definitely need to give you all more credits!!! I claim considerably less than benchmarks would demand. It's quorum=1, but no benches. My puter has this C/h rates for various projects (decending order): Yes, I saw that one too late, it's a long time sample, currently they are at about 25.5 C/h, that's about what I claim. ____________ Gruesse vom Saenger For questions about Boinc look in the BOINC-Wiki |
ID: 4282
I claim considerably less than benchmarks would demand. It's quorum=1, but no benches. Hm, then why do your results always get what they claim? It's the same for me - claim = grant, so it must be benchmark-dependent. ;-) And I note down my WUs in an Excel worksheet. The average per hour for the last WUs was always the same. With fixed credits it would have varied. ;) ____________ Life is Science, and Science rules. To the universe and beyond Proud member of BOINC@Heidelberg
ID: 4283
I claim considerably less than benchmarks would demand. It's quorum=1, but no benches. Hi, claim = grant because we no longer replicate. We are testing a post-processing algorithm for the results (clustering results based on deviations and energies). Trilce is out of town this week but when back next week she will tell us more about how the algorithm works. The positive thing is that we are collecting a lot of scientific data. Michela ____________ If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'! |
ID: 4284
I've got claim=grant as well on Einstein and CPDN, both are not using benches like Docking. Only they both give me considerably more than I would claim with benches.
ID: 4291
For comparison, on this machine I am averaging 13.8 per CPU hour here, 15.3 per CPU hour at SZTAKI, 19.4 per CPU hour at Rosetta and 21.1 per CPU hour at Einstein and 33.8 per CPU hour at QMC. (Q6600 @ 2.4GHz, Win XP). It has been 42 days since the last post, so I was wondering if there has been any progress on this, Michela? The granted credit is still very low; it is even lower on computers that benchmark poorly (as per my Linux machines compared to my Windows machines, even with the same hardware). Thanks and keep up the good work. ____________
ID: 4458
Being curious, and since I have several "dual boot" machines, I took a look at benchmarks for the same machine on windows VS Linux. (note: all are 64 bit Linux boinc versions, The Windows benchmarks are 64bit except where noted and on AMD processors )
ID: 4461
OK, I've collected all the WUs that exist in the database for those machine/OS combinations. I've found several very short WUs (<1400 sec) and deleted them from all records. Here's the average CPU seconds per OS, sample size, and claimed credit per CPU hour based upon the benchmarks in the previous post, using ((Whetstone + Dhrystone) x 3600)/1728000.
ID: 4462
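The divisor in that formula is the classic BOINC "cobblestone" normalization: 100 credits for one day of computing on a reference host benchmarking 1000 MFLOPS (Whetstone) and 1000 MIPS (Dhrystone). A minimal sketch (the function name is mine, not BOINC's):

```python
def claimed_credit(cpu_seconds, whetstone_mflops, dhrystone_mips):
    """Classic benchmark-based BOINC claim.

    Normalized so the 1000 MFLOPS / 1000 MIPS reference host earns
    100 credits per day: the constant folds to 2 * 1000 * 86400 / 100
    = 1,728,000.
    """
    return cpu_seconds * (whetstone_mflops + dhrystone_mips) / 1_728_000

print(claimed_credit(86_400, 1000, 1000))  # 100.0 for the reference host's day
```

Multiplying the per-second claim by 3600, as in the post above, gives the per-CPU-hour rates people are comparing in this thread.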
I vote 33.8 per CPU hour like QMC. (Q6600 @ 2.4GHz, Win XP)
ID: 4463
Hi All, we are discussing this item (increasing the credit), but we have some concerns. One is that if we increase the credit much, we will attract the kind of volunteers who do crazy things with the code or with the machines (malicious sw modifications or overclocking) just to gain more credit, even at the price of returning us bad results. Although we are using a strategy to validate, we don't want to go back to the use of HR and things like that, because as you know it results in longer times to get the credit and discrepancies between the claimed credit of one user and another, which ultimately means less credit for many users.
ID: 4468
Have you all seen project comparisons like the one done by Boincstats? project credit comparison. Doing a quick count before my first cup of joe shows that out of the 50 projects listed, 25 pay more than Docking and 24 pay less than Docking (might wanna recount for yourselves). Don't know what you/others can make out of this, or even how they come up with these numbers. Also, there's a comparison done by "allprojectstats" but can't find a link ATM.
ID: 4478
The site I usually see mentioned when project admins are talking about cross-project-parity is at
ID: 4482
The problem with SETI = 1 is that there is no way from the exported xml stats to see who is and who is not using an optimised science app, core client, or both, (and if they are, which ones).
ID: 4487
Dear Admins and project managers.
ID: 4489
Hi All, we are discussing about this item (increasing the credit), but we have some concerns. One is that if we increase the credit much, we will attract the kind of volunteers that do crazy things with the code or with the machines (sw modifications or overclocking) just to gain more credit, even to the price of returning us bad results. Although we are using a strategy to validate, we don't want to go back to the use of HR and things like that, because as you know it results in longer times to get the credit and discrepancies between the claimed credit of one user and other, which ultimately means less credit for many users. LESS??? And why would you want to start to grant less? You are already one of the lowest paying projects around. If you want to increase your participant base and get more volunteers to help with your projects, you should be at least on par with the other projects. And - what does overclocking have to do with it? Just asking. Overclocking just to gain more credit - isn't this part of the fun, to overclock machines to see how fast we can make them go? Nearly ALL my machines are highly overclocked. For example - over clocked
ID: 4490
Hi zeitgeistmovie.com,
ID: 4491
Dear DD,
ID: 4492
Hi All, we are discussing about this item (increasing the credit), but we have some concerns. One is that if we increase the credit much, we will attract the kind of volunteers that do crazy things with the code or with the machines (sw modifications or overclocking) just to gain more credit, even to the price of returning us bad results. Although we are using a strategy to validate, we don't want to go back to the use of HR and things like that, because as you know it results in longer times to get the credit and discrepancies between the claimed credit of one user and other, which ultimately means less credit for many users. All my computers are over-clocked. My results must be good if your system validates them. I agree with the issue of modifying the software application. If the project starts allowing or promoting "optimized" apps without project testing and acceptance, then I will leave. I do not run any projects that allow optimized apps that are not under project control. |
ID: 4493
Hi All, we are discussing about this item (increasing the credit), but we have some concerns. One is that if we increase the credit much, we will attract the kind of volunteers that do crazy things with the code or with the machines (sw modifications or overclocking) just to gain more credit, even to the price of returning us bad results. Although we are using a strategy to validate, we don't want to go back to the use of HR and things like that, because as you know it results in longer times to get the credit and discrepancies between the claimed credit of one user and other, which ultimately means less credit for many users. I have reduced my resources by 50%, while you check to see if the results from my over-clocked computers are bad . edit: I changed my mind...no need to have 50% of my computers giving you bad results. I have suspended all WUs until you verify that my computers are giving you good results. |
ID: 4494
Hi j2satx, Users cannot recompile our application, so they cannot run an optimized version of it. About your resources, we haven't had any complaints about your results; when we get invalid results we usually send an email to the owner of the host.
ID: 4495
I'm sure j2satx was talking about optimized BOINC application, not your project app... Optimizing BOINC can indeed increase the amount of credits received when using the Benchmark criteria... ____________ Teddies at Docking@Home |
ID: 4497
Hi Nite Owl, thank you for the correction.
ID: 4498
I was talking about the project app. I don't think it is possible for the project to have any control over the BOINC Client, but someone has to monitor that WUs are processed within reasonable boundaries, to prevent getting excessive credits if someone has modified the BOINC Client. |
ID: 4499
I don't understand why there has to be this big difference between my 'puters with different OS when I compare them up against the formula "claimed credit" = ((Whetstone + Dhrystone) x Cpu Seconds)/1728000??
ID: 4535
Have found time to post some averages for my machines
ID: 4604
*Bump*
ID: 4738
*Bump* "BUMP" again. Hello project team, has there been any movement or progress with the increasing of credit granted by this project??? It has been quite a while since anything has been heard. ____________
ID: 4917
ID: 4919
Changing the credit method means that work done before the change is worth less credit than after the change. That is not fair to all who crunch now. You might want to consider that in your deliberations.
ID: 4920
Changing the credit method means that work done before the change is worth less credit than after the change. That is not fair to all who crunch now. You might want to consider that in your deliberations. Dear adrianxw, I agree to a point. As I stated I am fine with such, but the importance here is contribution to science / Docking at home, and if it's numbers that the crunchers need to be active for such a worthy cause, then it is numbers that a project needs to supply for those that are donating computing time. It is a win win situation no matter how one feels about the number obsession. It's not how you, me, or anyone else feels about it... it's what we can do to promote Docking@home to those that have the equipment to calculate the work units. Those you call obsessed are also those that can do the most good for science; those are the ones we need to cater to for the important work that Docking is trying to achieve. Not seeing that factor is overlooking the most valuable variable in the distributed computing system. Again, to me it is all about the science and not the numbers in whole, yet the science needs the obsession of serious crunchers. Sincerely, Heidi-Ann Kennedy ____________ Recognized by the Carnegie Institute of Science, Washington D.C.
ID: 4921
SETI@home changes their credits all the time. And they are supposed to be the freakin' benchmark. And all the projects are supposed to be matching SETI, which means all the projects have to constantly re-adjust too. *sigh*
ID: 4967
I don't understand why there has to be this big difference between my 'puters with different OS when I compare them up against the formula "claimed credit" = ((Whetstone + Dhrystone) x Cpu Seconds)/1728000?? Because Docking uses credit = CPU seconds * (19 * Whetstone + Dhrystone) / 12,096,000. That's the reason it claims so low on most machines: the floating point benchmark score is usually a lot lower than the integer one. Granting 1000 credits a week instead of 100 credits a day for the virtual 1000 MFlops and 1000 MIPs standard computer does not help, as real ones normally have a Dhrystone benchmark a factor of 2 higher than the Whetstone value.

@ the project staff: By the way, using benchmark based credits without quorum is really brain-damaged if you are concerned about the malicious behaviour of some people (as it appears you are, judging from some comments here), as it is one of the easiest things to manipulate those benchmark scores. Furthermore, the whole BOINC integrated benchmark stuff is seriously flawed, as it varies a lot between different OSs or BOINC versions. Furthermore, it does not value architectural improvements of the CPUs and the ecosystem which don't improve the benchmark scores. Just as an example, for the exact same WU, an AMD Phenom running XP64 gets about 88 credits, a Phenom with XP32 104 credits, an AthlonX2 running WinXP32 gets 114 credits and an Intel Core i7 (with Hyperthreading reducing the performance per individual thread) under XP32 gets even 124 credits for the exact same work. That does not look right to me! Awarding one system almost 50% higher credits for the same work because it is actually slower for the achieved benchmark score is really the wrong way to tackle the credit issue ;)

It isn't that hard to implement fixed credits, as the WUs appear to be very evenly sized. It is the second best thing after flops based credits (i.e. really counting the executed operations in the code) and has the advantage that it could be implemented immediately. This credit stuff is important to a lot of crunchers. So if you want to secure or even extend your user base, you should think about starting to credit at least on par with other projects. Look at Spinhenge for instance! They adopted a fixed credit scheme half a year ago and it works really well. With such a scheme in place there is no way to "cheat" to get more credits than others. Besides using more resources to crunch, of course ;)
ID: 5108
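Taking the two formulas quoted in this thread at face value (the 19:1 Whetstone weighting and the 12,096,000 divisor are as stated in the post above, not independently verified), a short comparison shows why hosts whose Whetstone score is well under half their Dhrystone score claim less under Docking's variant:

```python
def standard_claim(cpu_seconds, whet, dhry):
    """Classic BOINC benchmark claim (Whetstone MFLOPS, Dhrystone MIPS)."""
    return cpu_seconds * (whet + dhry) / 1_728_000

def docking_claim(cpu_seconds, whet, dhry):
    """Docking@Home variant as quoted above: Whetstone weighted 19:1."""
    return cpu_seconds * (19 * whet + dhry) / 12_096_000

hour = 3600
# A host whose float score is well under half its integer score
# claims noticeably less under the weighted formula:
print(standard_claim(hour, 1500, 4500))  # 12.5 credits/h
print(docking_claim(hour, 1500, 4500))   # about 9.82 credits/h
```

With these constants the two formulas agree exactly when Whetstone is half of Dhrystone, and diverge on either side of that ratio, which matches the complaint that hosts with comparatively weak floating point benchmarks are the ones penalized.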
I don't understand why there has to be this big difference between my 'puters with different OS when I compare them up against the formula "claimed credit" = ((Whetstone + Dhrystone) x Cpu Seconds)/1728000?? I agree with what you say, Cluster Physik, and sadly I have stopped Docking on one computer due to the 27% drop in benchmark scores on my one Linux machine after "upgrading" Boinc from 5.10.21 to 6.4.5, so not impressed about that. What I earn on my Linux machines pales against a Windows machine even with the same components. (Also at SpinHenge the Windows apps run 30% or more faster than the Linux apps, at least on my AMD Opterons, so the fixed credit awarded on the Spinhenge project benefits Windows at the moment. They are still working on a faster Linux app - for over a year now I think, but they say they will introduce one.) Conan ____________
ID: 5111
No feedback?
ID: 5112
No feedback? We understand that credits are important for some of our volunteers. Still, the issue of the credits is an open issue. We had, for a short time, a fixed amount of credits per result, but then some of our volunteers felt that they were penalized because they had slow machines. If you feel that D@H does not reward you for your commitment, please feel free to donate your idle cycles to other projects. D@H is one of several projects that are looking at important scientific issues with the help of the public. This is marvelous and unique! We, the D@H team, are committed to the volunteer computing principle. We feel that any demonstration against a single project not only damages the work of students dedicated to their research but also (and more importantly) damages the other volunteers participating in the project and in general the volunteer computing paradigm. We are currently double-testing the new screensaver. One of our D@H volunteers identified a problem in the visualization and we have been able to fix the issue. Now we want to make sure that the code works properly before we distribute it. I want to point out that the issue with the visualization was found by one of you, and this is just great because we feel that you all are part of our team. Once we have the new screensaver out, we will have a meeting to discuss the credit issue. Again, it is very challenging to meet all the expectations, but once again we will do our best. Thank you for your support! Michela ____________ If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'!
ID: 5114
Maybe the project should have a look at the recent events at Aqua@home. Such things could happen here too as long as you don't fix your credits! If you really think about it you will see that the advantages by far outweigh the disadvantages (are there even any?). Someone contributing less to the science, i.e. calculating fewer WUs, should also get less of that virtual reward called credits, very simple. But the real problem is the possibility to cheat. If you solved that, you can (and should) still try to figure out why AMD powered machines are quite a bit slower than Intel ones (it may have something to do with the compilers and/or options used). If you feel that D@H does not reward you for your commitment, please feel free to donate your idle cycles to other projects. Thanks for that advice! By the way, have you recently looked at how much not only me but also my whole team is contributing here at Docking? We are currently double-testing the new screensaver. One of our D@H volunteers identified a problem in the visualization and we have been able to fix the issue. Now we want to make sure that the code works properly before we distribute it. I want to point out how the issue with the visualization was found by one of you and this is just great because we feel that you all are part of our team. Frankly, I think most people deactivate such stuff either way. It may be nice to have for a project, but those seeing BOINC as some kind of competition (quite a lot if you ask me) are more interested in the performance of their computers and deactivate it. And those interested in the scientific value of their donated computing power may have a short look and deactivate it as well. To sum it up, a screensaver is a nice addon, but the credits are a basic ingredient. A project can be almost torn apart over this issue. In an ideal world one wouldn't need them, but BOINC was designed to encourage competition between individuals as well as between teams to raise the donated computer power. And as in any competition with a lot of people there are always some malicious guys among them trying to cheat.
ID: 5115
Just a small addon.
We feel that any demonstration against a single project not only damages the work of students dedicated to their research but also (and more importantly) damages the other volunteers participating in the project and in general the volunteer computing paradigm. I agree with that. But there is an easy prevention against such malicious action: just use fixed credits. The benchmark approach you are using now is completely bogus and very easy to manipulate, which "not only damages the work of students dedicated to their research but also (and more importantly) damages the other volunteers participating in the project and in general the volunteer computing paradigm", as you put it. You don't control the environment in which your application runs, so you can't rely on either the benchmark values or the time reported for the WUs. You should really calculate the credit independent of those values. Otherwise this old system would get 280k credits a day with the reported benchmark values. I won't let it calculate any WU in that state (doesn't work either on that old Linux kernel 2.4), but I guess it shows the problem. Total Credit 0 By the way, Aqua put fixed credits in place within a day of the incidents due to such manipulations there. It can't be that hard ;)
||
ID: 5116 | Rating: 0 | rate: / | ||
Whoohooo... The brain power of the University of Delaware, The Scripps Research Institute, and the University of California - Berkeley creates a new screensaver! That's great! iwanthimiwanthimiwanthim.... MB ;-) |
||
ID: 5117 | Rating: 0 | rate: / | ||
I just want to clarify that I'm not
this guy
. He/she/it obviously had fewer objections than I did against a small demonstration. Just look at
this task
Task ID 6386453. I guess the problem I spoke of is obvious. |
||
ID: 5120 | Rating: 0 | rate: / | ||
You've got a point, Gipsel ;) |
||
ID: 5121 | Rating: 0 | rate: / | ||
ME TOO! *LMAO* You guys must be really bored to invest your precious time in programming a lousy screensaver! ____________ My NEW BOINC-Site Why people joined BOINC Synergy... |
||
ID: 5122 | Rating: 0 | rate: / | ||
You've got a point, Gipsel ;) Unfortunately, yes. I wanted to avoid the current situation, but I guess your action will raise the pressure a bit. But I'm glad you created a new account and a new team for this, so you don't mess up all the statistics. The cleanup will be a lot easier if the issue is isolated to a single user. The mess created by Alliance Francophone over at Aqua is quite bad in my opinion. |
||
ID: 5123 | Rating: 0 | rate: / | ||
So what is happening now?
|
||
ID: 5124 | Rating: 0 | rate: / | ||
Double post. |
||
ID: 5125 | Rating: 0 | rate: / | ||
So what is happening now? We are working on a solution for the credit problem. We will provide more details as soon as we have found a good fix. Michela ____________ If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'! |
||
ID: 5127 | Rating: 0 | rate: / | ||
There you go... Dark Gipsel...
|
||
ID: 5129 | Rating: 0 | rate: / | ||
There you go... Dark Gipsel... Yes, now it does. In the meantime, the thread was cut off after message 5120 and new posts were hidden. But now everything is back. |
||
ID: 5132 | Rating: 0 | rate: / | ||
It's not a mess created by ALL of the Alliance... First, it was only two of them; second, it was created in order to show that there was a very big problem with the points, please don't forget this. But everybody prefers to scream about them instead of screaming at all the cheaters who were earning their BIG credits silently...... (sorry if my English is not perfect ;) ). |
||
ID: 5133 | Rating: 0 | rate: / | ||
The mess created by Alliance Francophone over at Aqua is quite bad in my opinion. Okay, it was two members of Alliance Francophone who created the mess. Better? And AFAIK they claimed on the Aqua board that the issue had been openly discussed in your forum before. But what I was referring to is that they used their normal accounts for this. Here, that _[Docker]_ guy created an account and a team specifically for this purpose. Instead, your two colleagues chose to take the credits to their personal accounts and also to an account for AF. They mixed it up. That is what I call a mess. |
||
ID: 5134 | Rating: 0 | rate: / | ||
This is pathetic.... somebody is upset about the amount of credit so they take to hacking results.... Childish!!! :wall:
|
||
ID: 5135 | Rating: 0 | rate: / | ||
This is pathetic.... somebody is upset about the amount of credit so they take to hacking results.... Childish!!! :wall: Don't be so fast with your judgement. That was not about the credit level. It was aimed at proving a severe hole in the current credit system. This flaw has existed the whole time, and I'm sure a few even used it to gain credits in an unfair way. I agree this is a valid reason to leave a project. But Docking is already working on a solution which will probably literally fix the issue, as this is the easiest and safest way to prevent cheating. The introduction of a quorum would reduce the scientific output of the project and would only limit the "effectiveness" of these cheats but not eliminate them altogether. |
||
ID: 5136 | Rating: 0 | rate: / | ||
I'd vote for fixed, server based credit. The BOINC benchmarking system is useless, as it unreasonably favors certain processors and OSes. It also encourages various cheats and hacks. Server based credits preclude most of the problems and will make your lives (and ours) much more peaceful. |
||
ID: 5137 | Rating: 0 | rate: / | ||
I'd vote for fixed, server based credit. The BOINC benchmarking system is useless, as it unreasonably favors certain processors and OSes. It also encourages various cheats and hacks. Server based credits preclude most of the problems and will make your lives (and ours) much more peaceful. Exactly! |
||
ID: 5138 | Rating: 0 | rate: / | ||
What are the pros of variable credits?
Three reasons why fixed credits will not always be the right way to assign credits in Docking@Home
|
||
ID: 5139 | Rating: 0 | rate: / | ||
If Docking is about science and not credit, then why would you leave the science because people are messing with the credit? This is how a few bad apples spoil the bunch: by provoking a reaction. Let your computer do its honest work and keep contributing to this valuable project. If people are messing with the results, the scientists will know and deal with the results appropriately, and it looks like they're adding some measures to protect the credit system. Yeah, it's fun to see yourself rise up and down in the ranks, but it is more satisfying to see quality results and publications from the aggregation of all these data. |
||
ID: 5140 | Rating: 0 | rate: / | ||
I know y'all are more experienced at this than I, but I'm wondering if there's a server-side way of determining, from the uploaded results, how many cycles were carried out in a particular work unit? It totally makes sense that they're not deterministic and you can't know in advance exactly how long or how much math each one will require, but in the final analysis, when you have the data back, are there ways to figure out how many computations were used to generate the returned results? (refinement cycles * atoms * model 13 or 14 correction * credit/flop) or something like that? My thought then is that a failed work unit might still actually generate some credit, since the machine expended effort on it. I had a lot of failed work units at some point that went on for hours without terminating, or dropped after a few hours, but processor time was used and the RAC dropped a lot (it takes almost a month to recover). This would also be harder to mess with. You'd be able to use some similar metrics about time to completion vs. similar systems, whether more cycles are reported than can reasonably be done in the time the unit ran, etc. Just a thought,
|
||
ID: 5141 | Rating: 0 | rate: / | ||
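The back-of-the-envelope formula in the post above could be sketched roughly like this. Everything here is invented for illustration: the function name, the model correction factors and the credit-per-unit constant are assumptions, not anything the project actually uses.

```python
# Hypothetical server-side credit estimate computed from values parsed
# out of a returned result file. MODEL_FACTOR and CREDIT_PER_UNIT are
# invented tuning constants, not the project's real numbers.
MODEL_FACTOR = {13: 1.0, 14: 1.6}   # assume model 14 does more work per cycle
CREDIT_PER_UNIT = 2.5e-6            # credit granted per unit of reported work

def estimate_credit(refinement_cycles, atoms, model):
    """Credit proportional to the work actually reported back."""
    work = refinement_cycles * atoms * MODEL_FACTOR[model]
    return work * CREDIT_PER_UNIT

# A WU that failed partway through still reports the cycles it finished,
# so it still earns some credit for the effort expended:
partial = estimate_credit(150, 90_000, 14)   # fewer cycles, less credit
full = estimate_credit(400, 90_000, 14)
```

Because the credit is derived from the reported results rather than from client benchmarks, an inflated claim would have to be consistent with the returned data, which is exactly the sanity check the poster suggests.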
If you had access to the source code and could implement a counter or something like that, this would be possible with no problem (at least to check how much work was done in an asymptotic manner).
|
||
ID: 5144 | Rating: 0 | rate: / | ||
What are the pros of variable credits? Three reasons why fixed credits will not always be the right way to assign credits in Docking@Home If you want, I can give far more than three reasons why fixed credits are fundamentally better than benchmark-based ones. But first let me answer your reasoning and explain why I don't agree. 1. We are running simulations with different proteins and ligands: each complex has a different size in terms of number of atoms and this can result in variable lengths for the jobs when moving from one complex to another. Each complex already belongs to a different WU series. You can easily assign different credit values to different series. That's no argument in my eyes. 2. We are running different docking models: two algorithms are used for the docking simulations, each with different characteristics and lengths in terms of flops and time. Model 13 is shorter than model 14 because it uses a different representation of the solvent. Same answer as for point 1. I'm very sure you test the input for a new WU series with a new molecule or a new docking model locally before you distribute it to the participants. Anything else would be grossly negligent, as one can't rule out human error (e.g. just switching two values, selecting a wrong model, whatever). So you already have information about the runtime for all molecule/model combinations in a controlled environment. Just use the information you already have at hand to determine the fixed credit for each WU series! Virtually no increased effort on your side (as opposed to the system you just proposed) and no drawbacks for the participants (I will come to this point later). 3. 
Last and most important, each job has a non-deterministic length (the non-determinism is intrinsic to the molecular dynamics simulation performed): we set up a certain number of random conformations per job for our ligand and for each conformation we set up a certain number of rotations; however, if during the docking simulation there is an energy violation, the simulation is terminated. The volunteer gets the credits for the computer work done to that point and no penalty is applied. The volunteer can just proceed with the simulation of the next job. We use the simulation results from before the violation happens - nothing is wasted! We cannot predict the energy violation in advance, but it is better to stop the jobs that are causing the violation rather than continue them. Frankly, these cases appear to be quite rare. The execution times within a series are very uniform. And even if an energy violation is detected and the WU ends early, I'm sure you will know about it from the output file (you should!) and can grant credits proportional to the normal value. Even if the WU length were non-deterministic, you could still grant fixed credits as long as the average is okay (look at POEM for example). How are we now preventing volunteers from getting 1M credits per job? We changed the validation daemons so that we do not assign credits if: 1. So it would be okay to gradually increase the amount of cheating? 2. That means one just has to be clever and invent a CPU name no one else is using? What about overclocked machines? If you are able to define a maximum credit value for a WU series, why is it impossible to just define an appropriate value everyone simply gets for a WU, without further ado? Just off the top of my head there are a lot of reasons why this would still be worse than simply fixed (i.e. determined on the server side, independent of reported runtime or benchmark figures) credits. First of all, a very fundamental one. 
As I said already, you can't control the environment your app is running in. You can't rely on any information that comes back. That includes not only the benchmark values but also the CPU name, OS and so on (as demonstrated by _[Docker]_). Frankly, you should also add some kind of plausibility check (just guessing there is none in place) to the results, as one can even tinker with the WUs themselves (they look like interpreted scripts). Generally there are a lot of problems with the BOINC benchmarks even if one does not manipulate them. The benchmark values vary a lot when comparing different BOINC versions and/or a different OS. As an example, look at this computer running client 5.10.45 under Linux and this completely identical machine just with WinXP. The Linux host registers benchmark values of only Measured floating point speed 747.84 million ops/sec while under Windows I see Measured floating point speed 1335.22 million ops/sec Quite a difference, I think. Furthermore, there are sometimes severe problems with CPUs capable of dynamically changing their clock speed. That applies to virtually all notebook CPUs. But you may know that AMD CPUs downclock themselves under light load to 800MHz or 1GHz, while under full load they may run at more than 3GHz. I've seen several systems where the benchmark generated too little load to "wake up" the CPU to its full clock speed, whereas the WUs ran at full throttle afterwards. This leads to severely underclaiming hosts, and if the benchmark values get (correctly) recalculated at some point (there is a random component to this problem), they would get their WUs marked as invalid (because of claiming much more than before) with your proposed system. Another problem is that there may be heavy (non-BOINC) activity on the system when the benchmark is executed. This will also lead to severely reduced scores. But overclaiming benchmarks are also entirely possible without willfully manipulating anything. 
Just think of the new Core i7 series and its "Turbo" feature. If only one core is loaded and/or the CPU temperature is low, it raises the frequency of the loaded core(s). This easily leads to benchmark scores (the benchmark partly runs only single-threaded!) that are not representative of the actual crunching speed. The hyperthreading feature actually makes this even worse. I think I already gave the example of 88 credits claimed by a Phenom or Core2 and 124 credits claimed by a Core i7 for the same WU. And this problem will only get more pronounced as the CPU manufacturers implement further features that help the crunching speed but not the benchmark score, or extend such automated load- and temperature-dependent clocking schemes like Cool'n'Quiet, SpeedStep or that Turbo feature. All in all, even if the benchmark could not be manipulated, it still fails to represent the crunching power of a system. So why on earth do you want to base the credits on it? With the provisions you have taken you try to repair a concept that is fundamentally broken. Maybe you should ask yourself why most projects (especially the bigger ones) use fixed credits. The simple answer is that it is probably the easiest and safest way. |
||
ID: 5146 | Rating: 0 | rate: / | ||
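The per-series scheme proposed above (pretest each complex/model combination once, then pay a flat, server-determined value, pro-rated on early termination) could look something like this. The series names and credit values are made up; only the mechanism reflects the post.

```python
# Server-side lookup of a fixed credit per WU series, ignoring whatever
# the client reports about benchmarks or runtime. Values are illustrative,
# derived (in this sketch) from the project's own local pretest runs.
SERIES_CREDIT = {
    ("1k1i-trypsin", 13): 60.0,   # shorter model
    ("1k1i-trypsin", 14): 87.0,   # longer model, different solvent representation
}

def granted_credit(series, model, fraction_done=1.0):
    """Flat credit for the series; a WU ended early by an energy violation
    is paid proportionally to the fraction completed, as the thread describes."""
    return SERIES_CREDIT[(series, model)] * fraction_done
```

The design point is that nothing in the grant depends on client-supplied benchmark values, so there is nothing left to manipulate on the client side.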
If you had access to the source code and could implement a counter or something like that, this would be possible with no problem (at least to check how much work was done in an asymptotic manner). AFAIK, you get the CHARMM source code with your license. After all, you have to compile it for the different platforms. But whether one is allowed to modify it is beyond my knowledge. Maybe you are right that it is not allowed. Otherwise I can't imagine why we still have no 64-bit binaries ;) I really think it is not necessary to do an exact determination of the work done per WU. A characterization is completely sufficient. This can easily be done without access to the source. |
||
ID: 5147 | Rating: 0 | rate: / | ||
I applaud your approach to helping resolve this.
|
||
ID: 5148 | Rating: 0 | rate: / | ||
Generally there are a lot of problems with the BOINC benchmarks even if one does not manipulate them. The benchmark values vary a lot when comparing different BOINC versions and/or a different OS. As an example, look at this computer running client 5.10.45 under Linux and this completely identical machine just with WinXP. The Linux host registers benchmark values of only Measured floating point speed 747.84 million ops/sec while under Windows I see Measured floating point speed 1335.22 million ops/sec I totally agree. The BOINC benchmarking system has always been a mess and the introduction of the newer processors has pretty much rendered it invalid. The i7 overclaims so badly it's ridiculous. The BOINC developers have done nothing to fix the problem, which makes server-assigned credit the only system that is currently at all equitable. |
||
ID: 5151 | Rating: 0 | rate: / | ||
What is our next step? We will put a survey open for the next 20 days to collect your votes and set up the credit system accordingly. Has the survey been posted? |
||
ID: 5152 | Rating: 0 | rate: / | ||
What is our next step? We will put a survey open for the next 20 days to collect your votes and set up the credit system accordingly. If not, we can vote by acclamation ;) Every participant expressing a clear opinion so far has favored server-assigned (fixed) credits. Anyone against it? |
||
ID: 5153 | Rating: 0 | rate: / | ||
I'm also for server-assigned credits. Something is definitely up with the current system. I just went through some of my workunits and found that different computers are getting different amounts of credit for the same work. My laptop cranks out a workunit in about the same time as my workstation (similar-speed cores, the workstation just has more of them). They complete a similarly sized workunit in about the same time, but the laptop is receiving 1/4 the credit of the workstation. Very weird. |
||
ID: 5155 | Rating: 0 | rate: / | ||
They complete a similarly sized workunit in about the same time, but the laptop is receiving 1/4 the credit of the workstation. Very weird.That is the problem with the changing clockspeed during the benchmark I spoke of. One of the reasons why benchmark based credits are simply crap. |
||
ID: 5157 | Rating: 0 | rate: / | ||
They complete a similarly sized workunit in about the same time, but the laptop is receiving 1/4 the credit of the workstation. Very weird.That is the problem with the changing clockspeed during the benchmark I spoke of. One of the reasons why benchmark based credits are simply crap. Ah, that would totally explain it: going back through work units just on the laptop, I see that there have been a bunch of them lately taking 18000 seconds to complete (the p38 work units). They were usually garnering about 87 credits/work unit, consistent with my other computers as well, but recently the laptop has been getting only 25 credits/work unit even though the time to completion didn't change. This would explain why the RAC for the laptop has been steadily falling for the last few days even though it's been running full tilt. No wonder my rank is slipping even though I didn't change anything. Sigh. |
||
ID: 5158 | Rating: 0 | rate: / | ||
Server based credit all the way :-)
|
||
ID: 5162 | Rating: 0 | rate: / | ||
Server based credit all the way :-) Yup, that was totally it, actually. I just ran it down from the logs. The problem: the laptop is getting only 1/4 of the credit per work unit that it had, or that computers of similar configuration get. The problem started recently. I went through the logs. I took my laptop into a meeting to take some notes. While there, it did one of its random benchmark tests that seem to occur once in a while. I leave BOINC running, as the meetings are only an hour or two and we have this extended battery pack thing on our computers that gives them 6 hours of additional life beyond the main battery pack. However, off line current the machine slows down the processor. I just ran benchmarks in both modes and it's about 1/4 or more slower when disconnected from power. It hadn't done a benchmark update SINCE then and so hadn't noticed that the processors were back to full speed. GAH, all these days of full-speed workunits for 1/4 credit. I TOTALLY vote for server-assigned credits. Tony |
||
ID: 5163 | Rating: 0 | rate: / | ||
Could the admin please delete this host and remove the entire credit of the user.
|
||
ID: 5180 | Rating: 0 | rate: / | ||
Pretest a new batch of WUs. Set a fixed amount of credits for it.
|
||
ID: 5189 | Rating: 0 | rate: / | ||
Same for Einstein: within one science run the time needed to complete one unit differs by up to 30%.
|
||
ID: 5191 | Rating: 0 | rate: / | ||
Duplicate post, kindly remove it ;-) |
||
ID: 5192 | Rating: 0 | rate: / | ||
+1 from me for fixed credits.
|
||
ID: 5203 | Rating: 0 | rate: / | ||
fixed credits please! |
||
ID: 5205 | Rating: 0 | rate: / | ||
+ 1 from me. |
||
ID: 5206 | Rating: 0 | rate: / | ||
I think the only way to solve the credit problem is fixed credits |
||
ID: 5207 | Rating: 0 | rate: / | ||
fixed Credits rock!:P
|
||
ID: 5208 | Rating: 0 | rate: / | ||
I think the only way to solve the credit problem is fixed credits Please, fixed credits for me! (and a 64-bit version would be nice) |
||
ID: 5209 | Rating: 0 | rate: / | ||
+1 from me for fixed credits.
|
||
ID: 5210 | Rating: 0 | rate: / | ||
+1 from me for fixed credits.
|
||
ID: 5211 | Rating: 0 | rate: / | ||
My vote for fixed credits! |
||
ID: 5212 | Rating: 0 | rate: / | ||
My vote for fixed credits! |
||
ID: 5213 | Rating: 0 | rate: / | ||
my vote for fixed |
||
ID: 5214 | Rating: 0 | rate: / | ||
vote for fixed credits! |
||
ID: 5215 | Rating: 0 | rate: / | ||
Yes, we can. <- is like the choice between plague and cholera.
|
||
ID: 5216 | Rating: 0 | rate: / | ||
+1 from me for fixed credits.
|
||
ID: 5217 | Rating: 0 | rate: / | ||
Fixed Credits is the only chance for D@H to avoid cheating. |
||
ID: 5218 | Rating: 0 | rate: / | ||
In fact, all WUs run for about the same time (±10%). Set fixed credits for this, and no Docker account could have made trouble...
|
||
ID: 5219 | Rating: 0 | rate: / | ||
[X]fixed Credits |
||
ID: 5220 | Rating: 0 | rate: / | ||
+1 for fixed credits please! |
||
ID: 5221 | Rating: 0 | rate: / | ||
We are testing D@H with fixed credits.
|
||
ID: 5222 | Rating: 0 | rate: / | ||
Hi, quite
NEW
here, and this is my first post.
|
||
ID: 5223 | Rating: 0 | rate: / | ||
[x] fixed credits |
||
ID: 5224 | Rating: 0 | rate: / | ||
We are testing D@H with fixed credits. Nice to hear, Michela! As mentioned elsewhere, 39 votes for fixed credits from Phoenix Rising. Really though, WU lengths are always pretty close. Just one figure would work for most of us. Sometimes we get a little extra, and sometimes a little less, for our time. Yet it should work out in the long haul. Bet you never thought crunchers could be such a pain in the butt ~smiles~ Take care, Heidi-Ann ____________ Recognized by the Carnegie Institute of Science, Washington D.C. |
||
ID: 5225 | Rating: 0 | rate: / | ||
Fixed Credits - will make my Day. Signed!
|
||
ID: 5227 | Rating: 0 | rate: / | ||
[x] fixed Credits.
|
||
ID: 5229 | Rating: 0 | rate: / | ||
[x] fixed credits, of course!
|
||
ID: 5232 | Rating: 0 | rate: / | ||
[x] fixed credits, of course! Off-topic response: Montana has some highways, but I cannot confirm if they still do; it has been about ten years since I was last on them. ____________ Recognized by the Carnegie Institute of Science, Washington D.C. |
||
ID: 5234 | Rating: 0 | rate: / | ||
I have been sitting back quietly crunching for Docking@home for many months. I am not one of the big heavyweights who crunch only for cobbles. I only have four quad-core systems dedicated to Docking plus a few part-time crunchers. After reading this thread for the third time ... it has been going on for over a year now ... several things are pretty obvious.
|
||
ID: 5235 | Rating: 0 | rate: / | ||
Has anybody seen fixed credits yet?
|
||
ID: 5236 | Rating: 0 | rate: / | ||
We are testing fixed credits in a mirrored system. We expect to have everything ready in the next couple of days. |
||
ID: 5237 | Rating: 0 | rate: / | ||
We are testing fixed credits in a mirrored system. We expect to have everything ready in the next couple of days. Ah, good news, and thanks for the info. It's nice to see that you're working on it. |
||
ID: 5238 | Rating: 0 | rate: / | ||
Thanx for the info!
|
||
ID: 5239 | Rating: 0 | rate: / | ||
Delighted to hear that we will soon be getting fixed credits. It is the only way to go.
|
||
ID: 5242 | Rating: 0 | rate: / | ||
It took more than 2 days, because of the non-deterministic nature of our tasks, but we are ready to move to fixed credits tomorrow, after we finish the distribution of 1k1i-trypsin-model14.
|
||
ID: 5260 | Rating: 0 | rate: / | ||
Fixed credits are used now. We will keep tuning the flops constant over the following weeks. Your input will be greatly appreciated. |
||
ID: 5273 | Rating: 0 | rate: / | ||
Fixed credits are used now. We will keep tuning the flops constant over the following weeks. Your input will be greatly appreciated. Thanks to you, Trilce, Michela and the rest of the Docking team for your work on this issue. I have been waiting a very long time for something to be done, as my poor old Opterons running Linux have been hammered credit-wise, particularly by Windows machines. I have made comments before about the low granted credit (as long as you did not change your benchmarks), so the amount being awarded at the moment is great; leave it there please. I have now increased my preference share for Docking. Keep up the great work, Conan. ____________ |
||
ID: 5275 | Rating: 0 | rate: / | ||
Haven't been around much lately... and now what do my tiny little eyes see?
|
||
ID: 5282 | Rating: 0 | rate: / | ||
Fixed credits are used now. We will keep tuning the flops constant over the following weeks. Your input will be greatly appreciated. Many thanks for fixed credits. :) |
||
ID: 5283 | Rating: 0 | rate: / | ||
Fixed credits are great, really. Very much appreciated.
|
||
ID: 5285 | Rating: 0 | rate: / | ||
Hello everybody,
|
||
ID: 5286 | Rating: 0 | rate: / | ||
Claimed credits came from the benchmarks (still) My computer benches as follows: Measured floating point speed 3239.93 million ops/sec Measured integer speed 8951.05 million ops/sec claimed credit = ([whetstone]+[dhrystone]) * wu_cpu_time_in_sec / 1728000 For my machine per hour: (3239.93 + 8951.05) * 3600 / 1728000 = 25.397875 My claims here have been very consistent at 21 credits per hour for a very long time. For example this one, # 6947368 : Official formula: (3239.93 + 8951.05) * 8267.269 / 1728000 = 58.325296 but: Claimed credit 48.3207651724276 Granted credit 126.673348 So I'm definitely not claiming according to my benchmarks. And I'm not being granted it either; the granted credit is a little more than double what my official claim would be. In other benchmark-based projects my claims are calculated correctly, so the faulty calculation does not originate on my machine. ____________ Gruesse vom Saenger For questions about Boinc look in the BOINC-Wiki |
||
ID: 5288 | Rating: 0 | rate: / | ||
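The standard claim quoted above (100 cobblestones per day for a host benchmarking 1000 MFLOPS and 1000 MIPS) is easy to reproduce. This sketch simply restates the poster's formula; the function name is invented.

```python
def claimed_credit(whetstone_mflops, dhrystone_mips, cpu_seconds):
    """Benchmark-based BOINC claim: (whetstone + dhrystone) * t / 1728000.
    The divider is calibrated so 1000 MFLOPS + 1000 MIPS earns 100/day."""
    return (whetstone_mflops + dhrystone_mips) * cpu_seconds / 1_728_000

# The poster's per-hour figure, approximately 25.397875 credits:
per_hour = claimed_credit(3239.93, 8951.05, 3600)
```

Note that everything on the right-hand side is client-reported, which is the whole weakness the thread is about: fake either benchmark value and the claim scales with it.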
Claimed credits came from the benchmarks (still) Sure you do. Look here for an explanation! How it all started ;) |
||
ID: 5289 | Rating: 0 | rate: / | ||
Sure you do. Look here for an explanation! How it all started ;) Thanks, I hadn't read all the posts, since nothing really happened with the meager credits after the initial posts here and I lost contact a bit. How did you know this formula? Is it stated anywhere? Is there any explanation for using a non-standard formula? I understand leaving benchmarks and moving to flop-count or fixed credits, but altering the original formula makes absolutely no sense to me if the buggy benchmarks are still used. ____________ Gruesse vom Saenger For questions about Boinc look in the BOINC-Wiki |
||
ID: 5291 | Rating: 0 | rate: / | ||
How did you know this formula? I'm smart ;) No, seriously - hmm, of course I am - but I simply looked at the claimed credits of two different computers and their benchmark scores. If you assume the credit formula looks somehow similar to the original one (i.e. just with two weighting factors added and another divider at the end), you have enough information to solve that small set of linear equations. Is it stated anywhere? Yes, in my post above ;) Otherwise no. But you can easily check that it's true. Is there any explanation for using a non-standard formula? Not really. I think the project thought: we weight the benchmarks according to the assumed workload. The calculations are floating-point dominated, so let's weight the FP benchmark more. As they saw that the credits were extremely low afterwards, they altered the formula a second time so a 1000 MFLOPS/1000 MIPS computer would get 1000 credits a week (instead of 100 a day). I understand leaving benchmarks and moving to flop-count or fixed credits, but altering the original formula makes absolutely no sense to me if the buggy benchmarks are still used. You are completely right. But that's beating a dead horse, as we have fixed credits now. |
||
ID: 5292 | Rating: 0 | rate: / | ||
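Recovering the hidden weighting the way described above amounts to solving a 2x2 linear system. This sketch assumes the claim has the form (a*whetstone + b*dhrystone) * t / 1728000 with unknown weights a and b; the function is illustrative and any host figures fed to it would be observations, not values from this thread.

```python
DIVIDER = 1_728_000  # constant from the standard BOINC claim formula

def solve_weights(h1, h2):
    """Each host tuple: (whetstone, dhrystone, cpu_seconds, claimed_credit).
    Rearranging c = (a*w + b*d) * t / DIVIDER for two hosts gives the system
    a*(w*t) + b*(d*t) = c*DIVIDER, solved here by Cramer's rule."""
    (w1, d1, t1, c1), (w2, d2, t2, c2) = h1, h2
    m11, m12, r1 = w1 * t1, d1 * t1, c1 * DIVIDER
    m21, m22, r2 = w2 * t2, d2 * t2, c2 * DIVIDER
    det = m11 * m22 - m12 * m21
    return (r1 * m22 - r2 * m12) / det, (m11 * r2 - m21 * r1) / det
```

With two hosts whose benchmark ratios differ enough that the system is well-conditioned, the two weights fall right out; a third host then serves as a consistency check on the assumed form of the formula.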
You are completely right. But that's beating a dead horse as we have fixed credits now. Not quite dead yet, as it's still used for the calculation of the now rather worthless claims. I'd like this horse really dead ;) But of course you're right, it's now rather unimportant. I'd like to know what number of credits per hour they take as their standard. ____________ Gruesse vom Saenger For questions about Boinc look in the BOINC-Wiki |
||
ID: 5299 | Rating: 0 | rate: / | ||
Fixed credits are used now. We will keep tuning the flops constant over the following weeks. Your input will be greatly appreciated. Thank you, thank you, THANK YOU! |
||
ID: 5305 | Rating: 0 | rate: / | ||