> I'm no expert on this, but this is just my observations over time.
>
> Generally, the validator waits for two or three completed results before it
> awards any credit.
Three. And they have to be valid, i.e. contain no memory dumps or other non-numeric stuff.
> At this point, it picks the lowest of those two/three
> claimed credits and awards it to all the completed WUs.
At this point, it does a "matching" of every result against every other, and
determines the "best guess" (remember that in the real world, numerical results
from different machines are never bit-identical...)
The "best guess" is declared the "canonical result" which additional results
can be matched against, and if they fit, they get exactly the same credit
as the ones that matched before.
Non-matching results are declared INVALID and dropped.
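The matching process described above can be sketched roughly as follows. This is NOT the actual BOINC validator code: the function names, the result representation, the 1% tolerance, and the choice of granting the lowest matching claim (as the quoted post suggests) are all illustrative assumptions.

```python
# Illustrative sketch of quorum-based validation as described in this
# thread -- not the real BOINC validator. Results are plain dicts here;
# the 1% tolerance and the min-claim grant are assumptions.

def results_match(a, b, tolerance=0.01):
    """Two results "match" if their numeric outputs agree within a relative
    tolerance (bit-identical numbers can't be expected across hosts)."""
    return all(abs(x - y) <= tolerance * max(abs(x), abs(y), 1e-12)
               for x, y in zip(a["output"], b["output"]))

def validate(results, quorum=3):
    """Wait for a quorum of valid results, pick a canonical "best guess" by
    pairwise matching, grant one credit value to every matching result, and
    mark the rest INVALID. Returns None while the quorum is not yet met."""
    valid = [r for r in results if r.get("valid", True)]
    if len(valid) < quorum:
        return None  # validator keeps waiting; extra jobs may be generated
    # "best guess" = the result that agrees with the most other results
    canonical = max(valid, key=lambda r: sum(results_match(r, o) for o in valid))
    matchers = [r for r in valid if results_match(r, canonical)]
    granted = min(r["claimed"] for r in matchers)  # lowest matching claim
    for r in valid:
        if r in matchers:
            r["granted"] = granted   # same credit for every matcher
        else:
            r["status"] = "INVALID"  # non-matching results are dropped
    return canonical, granted
```

A result reported after validation would simply be matched against the stored canonical result and, if it fits, be granted the same amount.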
> If I read my
> information right, this is theoretically how BOINC is designed to grant
> credit.
There are lots of adjustable things...
> However, there are some flukes. Juerschi's is a great example:
> http://einsteinathome.org/workunit/307539
>
> Notice that the awarded credit was the lowest amount between the two units
> (bottom listings) that reported on 19 Jan 2005. Then the final result (3rd
> from top) was reported on 21 Jan 2005. From looking at this, the validator
> probably awarded credit after receiving the first two results.
This should be considered an error: the matching algorithm takes three results to determine the canonical result. If there are fewer than three, the validator will wait for more. If a result proves to be INVALID from the very beginning (file unreadable at some level, including bad characters inside...), another job is generated to ensure the "quorum" can still be reached.
> Then, when the
> third one came in two days later, it was awarded the same amount as had
> already been credited to that WU.
All results that match the canonical one are assigned the same credit.
> I've seen cases where this happened in my
> own WUs. I received one from P@H to crunch somewhere around 15 Jan or so,
> which had received two results back in December, both of which had been
> granted credit. I came in somewhat under their claimed credits, but I was
> award the same amount they had received. I don't have that info available to
> me right now, but I could try and look it up later if anyone was interested.
The P@H validator may work completely differently!
> Now having said all of this, just rooting around on a couple of the examples
> given seems to indicate something fishy to me. Juerschi's WU indicates two
> similar machines (Athlon XP 2600) whose only differences are one was a mobile
> (laptop) chip and the other appears to be desktop, which returned similar
> benchmark results. Yet, somehow the desktop smoked the laptop by almost a
> factor of 10 (4,969.91 secs vs 45,241.64 secs). Seems to me that something may
> be wrong in the validator code somewhere that accepted an incomplete result
> from the "really fast" host as a completed WU.
This should not happen.
There may be a huge difference in the last stage of the code though that would
produce different run times: memory usage. The laptop probably has less RAM and had to swap a lot. Whether this additional time would be included in the credit claim, I don't know for sure.
Another source for time differences may be a spurious reset of accumulated CPU time which is being investigated right now... judging from the phone discussion from the desk next to me...
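For what it's worth: as I understand the classic benchmark-based claim, claimed credit grows linearly with CPU time (CPU days times the average of the two benchmarks, scaled by a constant), so two hosts with near-identical benchmarks but a 10x gap in run time would also claim credit differing by roughly 10x. A sketch, where the scaling constant and the benchmark figures are assumptions for illustration only:

```python
COBBLESTONE_FACTOR = 100  # scaling constant as I recall it; its exact
                          # value does not affect the ratio shown below

def claimed_credit(cpu_time_sec, p_fpops, p_iops):
    """Benchmark-based claim: CPU days times the average of the floating
    point (Whetstone) and integer (Dhrystone) benchmarks, in G-ops/sec."""
    cpu_days = cpu_time_sec / 86400.0
    avg_gops = (p_fpops + p_iops) / 2.0 / 1e9
    return COBBLESTONE_FACTOR * cpu_days * avg_gops

# The two Athlon XP 2600 run times from the workunit above; the benchmark
# numbers (1.5 GFLOPS, 3.0 GIPS) are invented, but identical for both hosts:
fast = claimed_credit(4969.91, 1.5e9, 3.0e9)
slow = claimed_credit(45241.64, 1.5e9, 3.0e9)
# slow claims roughly 9x the credit of fast, mirroring the run-time ratio
```

So the near-identical benchmarks alone don't make the 10x run-time gap suspicious; the oddity is only in how such different run times arose.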
> And finally, this is no attempt to trash or contradict Bernd - so no offense
> to the staff! I'm just throwing in what I seem to remember as the way the
> system supposedly works, along with a few semi-educated guesses.
... which may or may not be wrong. Thanks a lot!
Steffen,
Thanks for the update on that - you've answered some of my own questions and patched up the holes in my answer. I was just curious, though: could it be possible that the result we were discussing (wuid=307539) had the credit granted manually after the first two results came in, and that the third result, returned a couple of days later, was deemed canonical and so received the same amount of credit? It occurred to me that, with a lot of manual credit granting going on to pacify the natives, there might be some flaky credit like this one floating around.
I still think a lot of this will clear up after forcing cc4.13 out for the newer versions, but again, that's just me making a hunch.
It's kind of funny, but trying to guess answers about the project system from an outside perspective is finally forcing me to try out that "black box" testing I was taught when I was studying computer science...
www.clintcollins.org - spouting off at the speed of site
> I was just curious though, could it be
> possible that the result we were discussing (wuid=307539) had the credit
> granted manually after the first two results came in and then the third result
> returned a couple of days later and was deemed canonical so it received the
> same amount of credit? It occurred to me that with a lot of manual credit
> granting going on to pacify the natives that there might be some flaky credit
> like this one floating around.
No.
When I switched the scheduler a couple of weeks ago, I did manually grant credit for some workunits (WU) that had to be cancelled since they were not compatible with the new scheduler. But I haven't done this for any of the new WU issued since the scheduler was changed. All WU in the database are of this form.
Bruce
Director, Einstein@Home
So to summarise we have:
A validator picking random amounts of claimed credit to grant.
A faulty app (v4.71) which reports only the CPU time since the WU was resumed, thus claiming lower credit.
And I have seen quorums of 3 and 4, so I'm not sure what the official size is meant to be.
And on the faulty app point, it is my understanding that WU's that were intended to be processed with this app will only be processed with this app. Later WU's that are required to use v4.72 automatically download that app when it is required. So all of the WU's that were designed for 4.71 will process with that app no matter what BOINC version you are running.
> So to summarise we have:
> A validator picking random amounts of claimed credit to grant.
> A faulty app (v4.71) which reports only the CPU time since the WU was
> resumed, thus claiming lower credit.
> And I have seen quorums of 3 and 4, so I'm not sure what the official size
> is meant to be.
>
> And on the faulty app point, it is my understanding that WU's that were
> intended to be processed with this app will only be processed with this app.
> Later WU's that are required to use v4.72 automatically download that app when
> it is required. So all of the WU's that were designed for 4.71 will process
> with that app no matter what BOINC version you are running.
We are still testing...
> I've participated in all the Projects so far & have always felt Seti gave
> out the most credit per WU for the time it takes to Crunch the WU's, which
> project gives more per WU than Seti ... ???
>
> I know at Seti is where I can reach the highest RAC (Around 4300) (Compared to
> only about 2500 at the other projects) so I just always assumed Seti was
> giving out the more credit than the other projects & other people have
> stated about the same thing ...
Apart from what I have calculated (or noticed):
Seti: about 10 credits per hour
CPDN: about 15 credits per hour
LHC: about 8 credits per hour
Predictor: 8-9 credits per hour
Einstein: 8-9 credits per hour (not enough results to be significant)
Pirates: about 4-7 credits per hour (worst project for getting work!!!)
CPDN: one "trickle" (=94.xx credits) takes me 7 hours and 20 minutes, which makes approx. 15 credits per hour!
Now which project gives the most credits? - no offense -
greetz from Switzerland
littleBouncer
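As a sanity check on the figures above, the trickle arithmetic works out as follows (94.5 is an assumed stand-in for the elided "94.xx"):

```python
def credits_per_hour(credits, hours, minutes=0):
    """Convert a credits-per-unit-of-work figure into credits per hour."""
    return credits / (hours + minutes / 60.0)

# CPDN: one trickle in 7 hours 20 minutes; 94.5 is an assumed value,
# since the exact credit per trickle was not given above.
rate = credits_per_hour(94.5, 7, 20)  # roughly 12.9 credits per hour
```

With these numbers the CPDN rate comes out closer to 13 than 15 credits per hour, so either the trickle is worth somewhat more than assumed here or the quoted figure is a little optimistic.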
Perhaps a good question, but I don't see it that way.
My question wouldn't be "How do I get the most credit?" but "Which CPU should I put on which project?". E.g.:
CPDN and SETI do very well on an Intel P4; AMD was very good on CPDN Classic.
Einstein and Predictor do well on AMD.
I think one cannot say in general: the type of CPU must be taken into account. Also note that Linux was found to be faster on CPDN than M$ Win.
Again - no offense - just a point of view...
> Apart what I have calculated (or noticed):
>
> Seti: about 10 credits per hour
> CPDN: about 15 credits per hour
> LHC: about 8 credits per hour
> Predictor: 8-9 credits per hour
> Einstein: 8-9 credits per hour (not enough results to be significant)
> Pirates: about 4-7 credits per hour (worst project for getting work!!!)
>
> CPDN: one "trickle" (=94.xx credits) takes me 7 hours and 20 minutes, which
> makes approx. 15 credits per hour!
>
> Now which project gives the most credits? - no offense -
>
> greetz from Switzerland
> littleBouncer
I haven't run CPDN in a while, littleBouncer, so you could be right; I heard they did raise the Credit Per Trickle over there some time ago. I was just going by when they were only giving out 76.4 Credits Per Trickle. But I still think Seti gives out more Credit than any of the other Projects, or at the very least it's running neck and neck with CPDN ...
Your figure of 10 credits per hour for Seti is way lower than what I was getting over there; I was getting more like 15 per hour, which would put it neck & neck with CPDN now ... But that could be the differences in our Computers ... Friendly :)
@ Honza, I have run all the Projects to some extent and as far as I'm concerned the Intel P4's with HyperThreading are hard to beat at any of them. Unless you have a Dual CPU Computer like the Xeons or Opterons, you will do just fine at any of the Projects with the P4 HT CPUs by running them in HT Mode ...
That's correct - Intel's HyperThreading gives another 15-20% of extra performance.
It takes about 11 hours to complete an Einstein WU on a P4/3GHz/HT and about 6 hours on an AMD64 3000+. If I'm correct, both are doing about 4-5 WUs/day.
I'm not saying that any CPU is better.
I'm not looking for which CPU gives the most credit - that's a bit unimportant to me.
My idea and experience is that P4s are doing great on CPDN/BOINC (an Intel-optimized compiler), while AMD was doing great on CPDN Classic (with no optimization in the code).
It may be useful to put P4s on some projects and AMDs on others to maximize their performance; the measurement must be in WUs/day, not credit.
You may be right that the P4 is a good option when running any BOINC project (regardless of high power consumption, high temperatures, etc.).
> @ Honza, I have run all the Projects to some extent and as far as I'm
> concerned the Intel P4's with HyperThreading are hard to beat at any of them.
> Unless you have a Dual CPU Computer like the Xeons or Opterons, you will
> do just fine at any of the Projects with the P4 HT CPUs by running them in HT
> Mode ...
>
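The 4-5 WUs/day estimate above checks out once the P4's HT mode is counted as two results being crunched at once. A quick check (the function and its names are illustrative, using the run times from the post):

```python
def wus_per_day(hours_per_wu, concurrent_wus=1):
    """Workunit throughput for a host crunching several results at once."""
    return 24.0 / hours_per_wu * concurrent_wus

p4_ht = wus_per_day(11, concurrent_wus=2)  # ~4.4 WUs/day (two at a time)
amd64 = wus_per_day(6)                     # 4.0 WUs/day
```

So despite the 11-hour vs 6-hour per-WU gap, daily throughput really is comparable, which is why WUs/day is the fairer yardstick.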