
The FAQ says:

Why don't we use calculations that are also useful for some other purpose?

To provide security for the Bitcoin network, the calculations involved need to have some very specific features. These features are incompatible with leveraging the computation for other purposes.

That's a rather generic explanation. What are these "very specific features" that Bitcoin calculations need?


3 Answers 3


I can see that you are talking about the mining process and the SHA hashing algorithm. From what I understand, the function needs to be:

  1. Scalable - one needs to be able to precisely adjust how much work is required to find a solution. This is achieved through the target/difficulty mechanism and the uniformity of SHA's output (it produces a roughly uniform distribution of random-looking numbers).

  2. Quick to verify, with results that always verify the same way, without relying on a third party. Generally, you need to process a lot of blocks in order to catch up to the network, so verification must not take too long. There must also be no ambiguity in determining whether a given hash fulfils the preconditions, as there would be with some functions ("is this protein optimally folded?"). Moreover, floating point calculations are unsuitable, as different computers could produce different rounding errors. As for being verifiable without a third party - Bitcoin is a decentralised currency, and it could not remain one if it depended on some company or website to operate. Every peer needs to be able to independently verify a solution with a minimal amount of software and hardware. Scientific simulations can rarely fulfil this goal.

  3. The function needs to work one way and have no cryptographic weaknesses - generally, you don't want people to reach the result quicker than is fair. Hash functions are the type of function where one has to actually perform the calculations in order to get a result and cannot work backwards from a desired output. SHA-256 is secure and has no known weaknesses so far.

  4. The output of the function should depend on every part of the input, and changing even one byte of the input should change the output completely. Generally, the function needs to be very sensitive to tampering: any change to the input must change the output, so as to prevent malicious manipulation of data. Likewise, to keep the function scalable and to ensure nobody can "tinker" with the data a bit to produce another valid result from a previous solution, the output must appear pseudorandom, and any change to the input must change the entire output.
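The points above can be sketched in a few lines of Python. This is a minimal illustration of hash-based proof of work, not Bitcoin's actual 80-byte block-header serialization - the payload here is a made-up toy value - but the double SHA-256 and the compare-against-target step mirror the real scheme:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(block_data: bytes, target: int, max_nonce: int = 2**32):
    """Search for a nonce such that sha256d(block_data || nonce) < target.

    Because SHA-256 output is uniform, lowering the target scales the
    expected work smoothly (point 1); because the function is one-way,
    brute-force search is the only known approach (point 3).
    """
    for nonce in range(max_nonce):
        digest = sha256d(block_data + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    return None

def verify(block_data: bytes, nonce: int, target: int) -> bool:
    """Verification is a single double-hash: cheap and unambiguous (point 2)."""
    digest = sha256d(block_data + nonce.to_bytes(4, "little"))
    return int.from_bytes(digest, "big") < target

# An easy target (only the top 12 bits must be zero) so the search is quick.
easy_target = 1 << 244
nonce, digest = mine(b"example block", easy_target)
assert verify(b"example block", nonce, easy_target)
```

Any node can rerun `verify` independently, which is exactly the "no third party" property: the solution carries its own proof.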

Don't you think some of the points I made in my answer are valid as well? I think yours is better, but a few things I mentioned are requirements too - add them to your answer and I'll accept. –  ripper234 Dec 6 '12 at 6:21
    
@ripper234 I wrote the answer after having skimmed yours. I added a point about the input/output change. Let's see... Number 1 is mainly a feature of the blockchain, not really the hashing function, and is encompassed in my last point. Number 2 is covered by points 3 and 4. Number 3 I think I should add to point 2. –  ThePiachu Dec 6 '12 at 8:06
    
I think point 1 is not 'mainly a feature of the blockchain', but rather it helps explain why a simple list of useful problems can't be used as hash targets - because people might pre-calculate them (there might be workarounds to prevent this kind of pre-calculation, but still I think it's an important requirement of the whole hashing scheme ... not just the hash function specifically). –  ripper234 Dec 6 '12 at 8:19

The requirements of the Bitcoin calculation function:

  1. Every calculation must depend on the previous calculation. This is what makes each block strengthen the security of previous blocks.

  2. Every calculation should be impossible to know, or even make "partial progress" on, before the previous block is found and published. This prevents pre-calculation of blocks (by Satoshi or anyone else) - it ensures the "lottery ticket" that is finding a block is split only between miners currently working on the problem (that there are no "hidden lottery tickets").

  3. There should not be any central source that affects the types of calculations (e.g. a central server sending protein folding problems). The conditions for the calculation should be knowable from the peer to peer network (a central server is a single point of failure).

We do not know of any "useful" calculation (e.g. SETI@HOME, FOLDING@HOME) that abides by these criteria, so random "useless" hashes were chosen.
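Requirements 1 and 2 can be illustrated with a toy chain in Python. The header format here is a simplification I've assumed for the example - just the previous block's hash concatenated with a commitment to the transactions (real headers contain more fields, such as a Merkle root, timestamp, and nonce):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Toy chain: each "block" commits to the previous block's hash, so no work
# on block N+1 can begin before block N is published (requirement 2), and
# altering any past block invalidates every later one (requirement 1).
prev_hash = b"\x00" * 32  # placeholder genesis hash, purely illustrative
chain = []
for txs in (b"tx-set-A", b"tx-set-B", b"tx-set-C"):
    header = prev_hash + sha256d(txs)  # simplified header: prev hash + tx commitment
    chain.append((header, sha256d(header)))
    prev_hash = chain[-1][1]

# Tampering with the first block's transactions changes its hash, which then
# no longer matches the prev-hash embedded in the second block's header.
tampered = sha256d(chain[0][0][:32] + sha256d(b"tx-set-X"))
assert tampered != chain[1][0][:32]
```

Because each header embeds the previous hash, every new block re-confirms the entire history behind it - which is the sense in which each block "strengthens the security of previous blocks".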


Bitcoin mining exists first and foremost to synchronize transactions; it does so by allowing miners to acknowledge that they consider some transactions valid, by provably doing a calculation which references these transactions somehow. This requires the calculation to have the following properties:

  1. The most important requirement by far is that it is possible to take arbitrary data and algorithmically generate from it a problem to be solved; the problem must depend on the data in such a way that, without knowing the exact data, it is impossible to make any progress towards a solution.

    If you could not derive a problem from arbitrary data, you could not take transaction data (which is arbitrary) and present from it a problem whose solution acknowledges the transactions; the derivation must be algorithmic, otherwise some central entity would be needed to come up with the problem; and no work should be possible before the transactions are known, otherwise the work cannot meaningfully acknowledge that particular set of transactions.

  2. The problem must be much easier to verify than to solve. Otherwise, a lot of overhead would be created in operating the network after the problems are originally solved.

  3. It should be possible to freely scale the difficulty of the problems. Otherwise, as the computational power of the network changes, the solutions could arrive too quickly (causing forking and overhead) or too slowly (delaying the confirmation of transactions).

  4. We need to have sufficient confidence that the problems are indeed as difficult as we think they are. Otherwise, a party finding a significant shortcut could have an unexpected and unchallenged advantage.

Requirement 1 is the most important, and also the most unlikely to be satisfied by "useful" problems - real-world applications generally require a central domain expert to issue relevant problems.
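As an aside on requirement 3, Bitcoin scales difficulty by retargeting every 2016 blocks so that blocks keep arriving roughly every 10 minutes on average. A simplified Python sketch of that rule follows - the real implementation operates on the compact "bits" encoding of the target rather than a plain integer, but the arithmetic is the same:

```python
# Bitcoin's retargeting rule (simplified): every 2016 blocks, scale the
# target by how far the actual timespan deviated from the intended
# 10-minute-per-block average, clamped to a factor of 4 in each direction.
INTENDED_TIMESPAN = 2016 * 600  # 2016 blocks at 600 seconds each

def retarget(old_target: int, actual_timespan: int) -> int:
    clamped = min(max(actual_timespan, INTENDED_TIMESPAN // 4),
                  INTENDED_TIMESPAN * 4)
    return old_target * clamped // INTENDED_TIMESPAN

# Blocks arrived twice as fast as intended -> halve the target
# (i.e. double the difficulty).
assert retarget(1 << 224, INTENDED_TIMESPAN // 2) == 1 << 223
```

This only works because the underlying problem is freely tunable: a uniform hash output means "digest below target" succeeds with a probability exactly proportional to the target, something a fixed pool of real-world problems could not offer.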

