Posts by Memo

1)

Message boards : Number crunching : Cobblestones

( Message 4045 )
Posted 3347 days ago by Memo
If it is not too late, here are my 2 cents...

If WUs are about the same size, I think it is better to have a static credit per WU. This gives a little more work to the admins, but I think in the end both volunteers and admins will be happier. Plus, from my past experience with charmm being so crazy at times, I remember Andre was able to control credit better this way, that is, grant credit similar to other projects.
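A rough sketch of the two granting schemes being compared above; the constant, claimed values, and function names are illustrative, not D@H's actual validator logic:

```python
# Illustrative comparison: static per-WU credit vs. benchmark/claim-based
# credit. FIXED_CREDIT and the claimed numbers are made up, not D@H's.

FIXED_CREDIT = 25.0  # same grant for every WU in a batch of similar size

def grant_static(_claimed):
    # Ignore what hosts claim; every valid result gets the fixed amount.
    return FIXED_CREDIT

def grant_claim_based(claimed):
    # e.g. average the credit claimed by the replicas of a WU,
    # which varies with each host's benchmarks.
    return sum(claimed) / len(claimed)

print(grant_static([12.0, 40.0]))       # 25.0 -- predictable across hosts
print(grant_claim_based([12.0, 40.0]))  # 26.0 -- varies WU to WU
```

The static scheme trades a bit of admin effort (picking the number per batch) for grants that stay comparable across hosts and projects.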
2)

Message boards : Docking@Home Science : Charmm and the project

( Message 3901 )
Posted 3418 days ago by Memo
Hi everyone, I am glad that the project is moving forward. I hope that it gets back on track really soon and that I can donate some cycles (CPU and brain, if possible). I remember that during the last days of the project at UTEP we were waiting for a new charmm version. Can someone tell me whether the new version has been delivered and tested? Also, what is its version number?
3)

Message boards : Number crunching : Docking Schedule

( Message 3503 )
Posted 3655 days ago by Memo
go docking go!!!

4)

Message boards : Number crunching : Docking Schedule

( Message 3474 )
Posted 3673 days ago by Memo
If the prize is good I'm in :D
5)

Message boards : Number crunching : host distribution looks screwey

( Message 3443 )
Posted 3700 days ago by Memo
The 4 billion, I suspect, is an integer error. I haven't had any time to fix it; hopefully I will be able to this week.
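A minimal sketch of what such an "integer error" can look like: a small negative signed value reinterpreted as an unsigned 32-bit integer reads as roughly 4 billion. The actual stats field and code are not shown here, so this is illustrative only:

```python
# Hedged sketch: a stats page showing "4 billion" is often a small negative
# signed value being displayed as an unsigned 32-bit integer.

def as_uint32(n: int) -> int:
    """Reinterpret an integer as its unsigned 32-bit representation."""
    return n & 0xFFFFFFFF

print(as_uint32(-1))     # 4294967295 -- reads as "4 billion"
print(as_uint32(2**31))  # 2147483648 -- where signed 32-bit values wrap
```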

As for the two replicas in shared memory, let's see if a volunteer has one of those computers and attaches it to compute them; otherwise those WUs will just be canceled.
6)

Message boards : Number crunching : Docking Schedule

( Message 3426 )
Posted 3703 days ago by Memo
I'm surprised how short a time it took to get rid of 1200 workunits :-) Let's see how these last 30 or so are doing; if they stay too long, we'll have to cancel them like we did last time.

Thanks everybody for your help!

The D@H Team

When all WUs are gone, we will send everybody a personal email asking them to detach from the project, because the URL will change later on. When we are back online with the new URL, another email will follow asking everyone to attach again.

Man! Those last ~30 results waiting to be sent must be some really funky flavor. None of my machines seem to match the HR. I have

OSX/PPC
OSX/Intel
WinXP/Pentium D
WinXP/Core 2 Quad
WinXP/Xeon x5355
WinXP/AMD X2
WinXP/Pentium M

And all are reporting the work is for something else.



HR information on the remaining WUs might be a good idea; that way we can look for that type of machine... let's see who gets the last D@H Texan WU :)
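For anyone curious what "matching the HR" means, here is a rough sketch of BOINC-style homogeneous redundancy, assuming a class keyed on (OS, CPU vendor); BOINC's real HR classes are project-configured and finer-grained, so the keys here are illustrative:

```python
# Hedged sketch of homogeneous-redundancy (HR) matching: once the first
# replica of a workunit is sent, the WU is committed to that host's HR
# class, and later replicas only go to hosts in the same class.

def hr_class(os_name, cpu_vendor):
    """Illustrative HR class key for a host (real BOINC classes differ)."""
    return (os_name, cpu_vendor)

def can_send(wu_committed_class, host_os, host_cpu):
    """An uncommitted WU can go anywhere; a committed WU only goes to
    hosts whose class matches the first replica's."""
    return (wu_committed_class is None
            or wu_committed_class == hr_class(host_os, host_cpu))

print(can_send(None, "WinXP", "Intel"))              # True
print(can_send(("Linux", "AMD"), "WinXP", "Intel"))  # False
```

This is why a handful of committed WUs can sit unsent: none of the attached hosts happen to fall in the class the first replica went to.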
7)

Message boards : Application Info : charmm_5.7_i686-pc-linux-gnu

( Message 3384 )
Posted 3719 days ago by Memo
I think the charmm source was not touched in this case, but I can ask around.
8)

Message boards : Number crunching : what casued this error?

( Message 3377 )
Posted 3724 days ago by Memo
Do you have a lot of projects attached to this client? Or anything else that might help us reproduce this error in the lab?

We had a problem with Linux boxes exiting with code 1, but it was related to the stack problem, which I think we don't have on the Macs.
9)

Message boards : Cafe Docking : Personal Milestones

( Message 3359 )
Posted 3733 days ago by Memo
A million!

:)
10)

Message boards : Number crunching : Still hold off on connecting Windows hosts?

( Message 3290 )
Posted 3741 days ago by Memo
It's OK to add Windows machines; I have several Windows boxes and I haven't seen any problems. Plus, remember we are in alpha testing, so it is better to discover problems now than at a later time.

