Everyone says the "Great Firewall of China" is slowing Bitcoin. Why isn't there a minimum ping for running a Bitcoin node or miner? Why isn't some form of self-enforcing QoS already a part of Bitcoin?
Bitcoin is slow because the block size was left at 1MB (2MB with witness data after SegWit) once the original team of developers on GitHub was thrown out and the project was occupied by the developers of what is now known as Blockstream. That limit has been kept in place by appealing to two issues: mining in China, and the decentralization of the nodes (the transaction validators) that you point out in the article. Chinese mining has taken a large share of the pie that miners split - and miners are the ones who confirm the transactions and mine the blocks - since 2011, and those Chinese farms sit behind what the West calls "the Great Firewall", which prevents a stable connection and slows block propagation, mining and transaction confirmation beyond 3 minutes. The fear is that a larger block would drive out the large share of mining coming from China, the hash power would drop drastically, and Bitcoin's security would suffer: the less hash power, the greater the possibility of a 51% attack on the Bitcoin network that could enable double spending. That claim invites plenty of debate, though, because a 51% attack on an already "mature" network like Bitcoin requires considerable expenditure on mining equipment to control 51% of the mining power, and a miner or mining pool that is already receiving the block reward and the fees for the transfers confirmed in each block has little reason to attempt a double spend when it is sufficiently compensated for its work. Only a malicious actor intent on destroying the network, and willing to absorb the total loss on the equipment investment, would carry out such an operation. The possibility exists, but it is reduced precisely because the miner is paid for his activity. Staying with the Chinese mining farms, but on the economic side: Bitcoin has a cap of 21 million coins, which are obtained through mining and through fees on transfers.
Those 21 million are issued over time, and from then on Bitcoin becomes a deflationary asset, since there is no possibility of printing more coins. The question of block economics and the influence of Chinese mining comes down to the Bitcoin subsidy, currently called the block reward: when a miner adds a block to the chain, he receives the Bitcoin reward "inside" that block, currently 12.5 BTC. Every 210,000 blocks the reward is cut in half, so in less than a year (312 days from today) it will be reduced to 6.25, and miners will see their subsidy halved unless Bitcoin's price per coin rises considerably, or the mining farms begin to close or scale back their equipment, thereby reducing the network's hash power. If the subsidy per block halves every 210,000 blocks, there will come a time when miners can only sustain themselves and maintain their equipment on transaction fees - and on a Bitcoin network limited to 7 transactions per second, with fees that tend to rise as usage grows, it becomes unviable for miners to stay on that 1MB network, and above all for people to want to use a payment method that is expensive and slow, even more so than paper gold. Remember that Bitcoin was born as peer-to-peer cash, not gold. So if in time the subsidy or reward is going to be zero, or unable to cover the cost of mining equipment, a solution must be found if the developers refuse to touch the block size.
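As a rough illustration of the halving arithmetic described above (assuming the well-known consensus constants of a 50 BTC initial reward and a 210,000-block halving interval; the function and constant names are mine, not Bitcoin's source):

```python
HALVING_INTERVAL = 210_000   # blocks between subsidy halvings
INITIAL_SUBSIDY = 50.0       # BTC per block at genesis

def block_subsidy(height: int) -> float:
    """Coinbase subsidy (in BTC) at a given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:        # after 64 halvings the subsidy is effectively zero
        return 0.0
    return INITIAL_SUBSIDY / (2 ** halvings)
```

Summing subsidy times interval over every halving period converges on the 21 million cap the post refers to.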
And that leads to three proposals already raised in BIPs and in the community: RBF (Replace-By-Fee), the Lightning Network, and an increase in the supply of Bitcoin - since demand for Bitcoin does not rise because it offers a quality service, but out of speculation and, above all, the manipulation of Tether (USDT) and the large exchanges. - RBF consists of replacing an unconfirmed transaction with another that pays a higher fee, removing the earlier one from the mempool (the limbo where Bitcoin transactions wait to be confirmed). Although this system seems effective, it does not remove the long-term problem of keeping the block small: it postpones the miner-funding problem rather than eliminating it, and above all it kills the usability of Bitcoin transactions, since it does nothing about the rising fees that drive users away. It also makes double spending easier. - The Lightning Network is a second layer: a software development not implemented in the Bitcoin network itself, and therefore not an element of the blockchain. That alone should disqualify it, since an external element that cannot be audited the way Bitcoin can gives rise to "blanks": no way to audit accounts, and even the loss of money or the cancellation of a transaction. It also faces a routing problem, because in a network in constant flux, with payment channels opening and closing, it is unfeasible to propagate the full state quickly and completely to LN's nodes - which are not Bitcoin's nodes. That is where yet another new element of this network comes in: the watchtowers, charged with enforcing compliance on open channels across the entire LN payment network.
Obviously hiring a watchtower carries an additional cost, the service is not yet implemented, and given the pace at which the Lightning Network is being developed it is doubtful it will become available any time soon. In short, to use LN properly - which rarely works out - you need a node valued at $300, a watchtower, and a channel kept open 24/7 with sufficient funds to transact. - The increase in the supply of Bitcoin was raised in passing by developer Peter Todd, and it will become an open debate in a few years, when the mining block reward is low and the price of Bitcoin can no longer be sustained solely by the uncontrolled printing of Tether and the manipulation of the currency's price, together with the collusion of the exchanges headed by Bitfinex and various personalities of the 'crypto' world - if it survives long enough to see that moment, since Bitfinex is already being pursued for money laundering. When that moment arrives, I am sure a BIP (Bitcoin Improvement Proposal) will be launched by Blockstream, or the measure will simply be announced, destroying the essence of Bitcoin and TRUE DECENTRALIZATION: THE PROTOCOL. This brings us to the second reason for the slowness of Bitcoin. Correct, true decentralization runs through the code and the team of developers and maintainers, not anything else. The protocol must be engraved in stone, with the action of the miners distributing and decentralizing the network as they maintain the nodes and the transactions in a completely capitalist economic relationship. Investing in machines and communications improves access, speed and the spread of transactions and blocks, makes miners true competitors, and facilitates the transmission of money and all kinds of transactions. The decentralization of the nodes was the other great reason used to prevent the increase of the block, and with it faster transactions.
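For illustration, the core of the replace-by-fee mechanism discussed above - evicting an unconfirmed conflicting transaction when a higher-fee replacement arrives - can be sketched roughly like this (a simplification: real BIP 125 logic also checks fee rates, absolute fee increments and replacement limits; the names here are illustrative, not Bitcoin Core's):

```python
mempool = {}   # txid -> {"inputs": frozenset of spent outpoints, "fee": int}

def try_replace(txid, inputs, fee):
    """Accept a transaction, evicting unconfirmed conflicts that pay less."""
    conflicts = [t for t, tx in mempool.items() if tx["inputs"] & inputs]
    if any(mempool[t]["fee"] >= fee for t in conflicts):
        return False                  # an existing conflict pays as much or more
    for t in conflicts:               # evict the cheaper conflicting transactions
        del mempool[t]
    mempool[txid] = {"inputs": frozenset(inputs), "fee": fee}
    return True
```

The essay's complaint is visible in the sketch: replacement only reshuffles which fee the miner collects, it does nothing to lower fees overall.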
Basing the decentralization of Bitcoin on Raspberry Pi nodes rests on a false premise - it appears nowhere in the whitepaper. The propagation of a transaction, of all its stages, and of the blocks depends on the miner and his equipment, and on the pursuit of excellence in communications to avoid orphan blocks and reorganizations. (Orphan blocks are anticipated in the Nakamoto consensus and are part of Bitcoin; they pose no problem for transactions, only for settling the block reward, which affects the miners and should push them toward greater efficiency.) The Bitcoin network can be audited perfectly well without a Bitcoin node in every house - in fact, that would cause the same routing problems that occur, or will occur, in the LN network. Decentralization should run not through the nodes but through the developers and, to a lesser extent, the miners. If a protocol is continually being altered by its developers, they hold the power over the network, and they must be kept in constant check by the miners through the fees on transactions. Because of these two factors, BIP 101 - proposed by the developers Satoshi left in charge, and the origin of the creation of Bitcoin Unlimited - was rejected, and shortly after its creation it was attacked with DDoS attacks, in a statement of intent from the Blockstream Bitcoin network, leaving it a residual element. These two reasons are the cause of the strangling the Bitcoin network suffers - along with many other elements of the initial code that were removed, completely changing the nature and destiny of Bitcoin, which are not relevant here and I will not enumerate. Any other reason is propaganda from those who want to keep Bitcoin strangled in order to enrich themselves on mining subsidies and second-layer software like LN.
Bitcoin has a structure similar to gold and can take on certain of its attributes, but its destiny is the efficient and fast transmission of value, among other kinds of transactions. Bitcoin was designed to professionalize miners and create a new industry around them, so mining centers will become datacenters that replicate the full transaction log; this professionalization will eventually lead to specialization in other types of transactions, and new industries will be born around them to support the nodes according to their specialization - data, asset transfers, money, property rights, etc. Bitcoin scales to infinity if the protocol is left FREE enough to do so. P.S.: Core, since the departure of Hearn and Andresen, know perfectly well what they are doing: the worst breed of the cypherpunk movement has combined with the worst breed of the current synarchy; the extremes always touch.
References:
https://np.reddit.com/btc/comments/3ygo96/blocksize_consensus_census/cye0bmt/
https://www.youtube.com/watch?v=ivgxcEOyWNs&feature=youtu.be&t=2h36m20s
https://www.bitcoinblockhalf.com/
https://petertodd.org/2016/are-wallets-ready-for-rbf
https://www.ccn.com/bitcoin-atm-double-spenders-police-need-help-identifying-four-criminals/
https://bitcointalk.org/index.php?topic=4905430.0
https://www.trustnodes.com/2018/03/26/lightning-network-user-loses-funds
https://www.trustnodes.com/2019/03/13/lightning-network-has-many-routing-problems-says-lead-dev-at-lightning-labs
https://diar.co/volume-2-issue-25/
https://blockonomi.com/watchtowers-bitcoin-lightning-network/
https://twitter.com/starkness/status/676599570898419712
https://store.casa/lightning-node/
https://bitcoin.stackexchange.com/questions/81906/to-create-a-channel-on-the-lightning-network-do-you-have-to-execute-an-actual-t
https://blog.muun.com/the-inbound-capacity-problem-in-the-lightning-network/
https://medium.com/@octskyward/the-capacity-cliff-586d1bf7715e
https://dashnews.org/peter-todd-argues-for-bitcoin-inflation-to-support-security/
https://twitter.com/peterktodd/status/1092260891788103680
https://medium.com/datadriveninvestotether-usd-is-used-to-manipulate-bitcoin-prices-94714e65ee31
https://twitter.com/CryptoJetHammestatus/1149131155469455364
https://www.bitrates.com/news/p/crypto-collusion-the-web-of-secrets-at-the-core-of-the-crypto-market
https://archive.is/lk1lH
https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=8W00ssb7x5ZOaj8HKFdbfQ==
https://bitcointalk.org/index.php?topic=195.msg1611#msg1611
https://github.com/bitcoin/bips/blob/mastebip-0101.mediawiki
https://www.reddit.com/bitcoinxt/comments/3yewit/psa_if_youre_running_an_xt_node_in_stealth_mode/
https://www.reddit.com/btc/comments/3yebzi/coinbase_down/
https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)
We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know. Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here: https://youtu.be/tPImTXFb_U8
Connor: 0:02:19.68,0:02:45.10 Alright, so thank you Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain, who are also the lead developers of the Satoshi's Vision client. So Daniel and Steve, do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started? Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders, and at nChain I am the director of solutions and engineering, and specifically for Bitcoin SV I am the technical director of the project, which means that I'm a bit less hands-on than Daniel, but I handle a lot of the liaison with the miners - that's the coordination side of the project.
Hi, I'm Daniel. I'm the lead developer for Bitcoin SV. As the team's grown, that means I do less actual coding myself and more organizing the team and organizing what we're working on.
Connor: 0:03:23.07,0:04:15.98 Great, so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask: Bitcoin Cash is a little bit over a year old now, Bitcoin itself is ten years old, but in the past little over a year, what has the process been like for you guys working with the multiple development teams, and why is it important that the Satoshi's Vision client exists today? Steve: 0:04:17.66,0:06:03.46
I mean yes, well, we've been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreement around it, but what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff, but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the backend of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level, great. If they don't, not such a big problem - there will obviously be bridges between the two. So yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59 I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing? Daniel: 0:06:19.59,0:07:55.55
Sure, yeah. So our focus was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing, particularly not so much at the unit test level but more at the system test level - setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren't any other side effects. Because, you know, it was quite a rush to release the first version, we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that, but we're not there yet.
Just to tidy that up - we've spent a lot of our time developing really robust test processes, and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two - a slightly different numbering scheme to the Testnet Three that everyone's probably used to; that's just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one's set to reset every couple of days. The other one [Testnet Two] was set to post-activation so that we can test all of the consensus changes. The third one was a performance test network, which I think most people have probably heard us refer to before as the Gigablock Testnet. I get my tongue tied every time I try to say that word, so I've started calling it the performance test network, and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on - other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests - but the other one (which I think might still be a work in progress, so Daniel might be able to answer that one) is one where basically everyone will be able to join, and they can try and mess stuff up as bad as they want.
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00 I think that was my next question I saw that you posted on Twitter about the revived Gigablock testnet initiative and so it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were coming down - what does that revived Gigablock test initiative look like? Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain's help, and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so you'll notice, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team's been doing for the last month or two on the improvements that we need for scalability.
I think that's actually a good point to kind of frame where our priorities have been in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all, and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team - we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance-related work which, as Daniel mentioned, the results of our performance testing fed into - what tasks we were going to start working on for the performance-related stuff. Now that work is still in progress - for some of the items that we identified the code is done and that's going through the QA process, but it's not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it's been QA first, performance second. The performance enhancements are close and on the horizon, but some of that work should be ongoing for quite some time.
Some of the changes we need for the performance are really quite large and really get down into the base level of the software. There are kind of two groups of them, mainly. Ones that are internal to the software - to Bitcoin SV itself - improving the way it works inside. And then there are other ones that interface it with the outside world. For one of those in particular we're working closely with another group to make a compatible change - it's not consensus-changing or anything like that - but having the same interface on multiple different implementations will be very helpful, right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45 Obviously for Bitcoin SV one of the main things that you guys wanted to do, that some of the other developer groups weren't willing to do right now, is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about this - a lot of the objection to either removing the block size limit entirely or increasing it on a larger scale is this idea of the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the "infinite block attack" - is it something that really exists, is it something that miners themselves should be more proactive in preventing, or what are your thoughts on that attack that everyone says will happen if you uncap the block size? Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something where there are probably two schools of thought. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now, so that the limit is already raised when the software improves and can handle it, and you don't run into the limit. Obviously we're from the latter school of thought. As I said before, we've got a bunch of performance enhancements in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, there are a number of mitigations that you can put in place. Firstly, you know, going down to a bit of the tech detail - when you send a block message, or any peer-to-peer message, there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB, then obviously you know something's wrong, so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128, you know it's kind of pointless to download that message. So these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this.
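The two mitigations Steve mentions - dropping a peer whose message grows past its announced header size, and refusing to download a message whose announced size already exceeds the block size limit - might look something like this in outline (constants and names are illustrative, not actual Bitcoin SV networking code):

```python
MAX_BLOCK_SIZE = 128 * 1024 * 1024   # 128 MB cap discussed in the interview

def should_abort(announced_size: int, bytes_received: int) -> bool:
    """Decide whether to stop receiving a peer-to-peer block message."""
    if announced_size > MAX_BLOCK_SIZE:
        return True   # pointless to download: it can never be a valid block
    if bytes_received > announced_size:
        return True   # peer sent more than its header declared: drop it
    return False
```

This is why the "infinite block" never has to be fully downloaded, let alone validated.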
One other aspect of the infinite block attack - and let's not call it the infinite block attack, let's just call it the large block attack - is that it takes a lot of time to validate. We've gotten around that by having parallel pipelines for blocks to come in: so you've got a block that's coming in, it's got an unknown stuck on it for two hours or whatever, downloading and validating it. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then, you know, the problem kind of goes away.
Cory: 0:18:26.55,0:18:48.27 Are there any concerns with the propagation of those larger blocks? Because there are a lot of questions around what practical size Bitcoin SV could scale to right now, and the concerns around propagating those blocks across the whole network. Steve: 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and Xthin exist, so a 32MB block does not send 32MB of data in most cases - almost all cases. The concern here that I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall, and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they've given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that rate can sustain is pretty large. So we're looking at a couple of options. It may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk, to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side, send it over multiple links via splitters, and reassemble it on the other side, so we can sort of transit the Great Firewall without too much trouble. But getting back to the core of your question - yes, there is a theoretical limit to block size propagation time, and that's kind of where Moore's Law comes in.
Put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128 MB blocks are going to be an issue, though, with the speed of the internet that we have nowadays.
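As a toy sketch of the "TX vacuum" bridge idea Steve describes - collecting transactions and flushing them across the firewall as a single big chunk every second or two, instead of one chatty message each (the class and its interface are invented for illustration):

```python
import time

class TxBatcher:
    """Collect raw transactions and ship them as one batched message."""

    def __init__(self, interval=1.0, send=print):
        self.interval = interval          # seconds between flushes
        self.send = send                  # callable that ships one big chunk
        self.buffer = []
        self.last_flush = time.monotonic()

    def submit(self, raw_tx: bytes):
        self.buffer.append(raw_tx)
        if time.monotonic() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(b"".join(self.buffer))   # one message, many transactions
        self.buffer = []
        self.last_flush = time.monotonic()
```

The point of the design is simply fewer round trips over a lossy, filtered link; the real tool would also frame and acknowledge the chunks.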
Connor: 0:21:34.99,0:22:17.84 One of the other changes that you guys are introducing is increasing the max script size so I think right now it’s going from 201 to 500 [opcodes]. So I guess a few of the questions we got was I guess #1 like why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and then #2 also specifically we had a question about how certain are you that there are no remaining n squared bugs or vulnerabilities left in script execution? Steve: 0:22:15.50,0:25:36.79
It's an interesting decision - we were initially planning on removing that cap altogether and letting the next cap come into play after that (the next effective cap is a 10,000-byte limit on the size of the script). We took a more conservative route and decided to wind that back to 500. It's interesting that we got some criticism for that, when the primary criticism leveled against us was that it's dangerous to increase that limit to unlimited - we did that because we're being conservative. We did some research into these n squared bugs - sorry, attacks - that people have referred to. We identified a few of them, and we had a hard think about it and thought: look, if we can find this many in a short time, we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts, it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do - and this is something we've got an engineer actively working on right now - is once that script validation code path is properly parallelized (parts of it already are), we'll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates, and they'll just keep on passing through.
Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up, but everything else will keep ticking along. There are other mitigations for this as well - you could always put a time limit on script execution if you wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job, I think, is to provide the tools for the miners, and the miners can then choose how to make use of them - if they want to set time limits on script execution then that's a choice for them.
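The "lanes of the highway" idea Steve describes - reserving validation queues for well-known script templates so a slow non-standard script can't stall them - could be outlined like this (the template names and the routing test are illustrative only):

```python
import queue

STANDARD_TEMPLATES = {"P2PKH", "P2SH"}   # well-known script templates (illustrative)

standard_lane = queue.Queue()      # reserved: standard scripts keep flowing here
nonstandard_lane = queue.Queue()   # anything unusual waits in its own lane

def route(tx: dict) -> None:
    """Send a transaction to the validation lane for its script template."""
    lane = standard_lane if tx["template"] in STANDARD_TEMPLATES else nonstandard_lane
    lane.put(tx)
```

In a real node each lane would be drained by its own pool of validator threads, so a pathological script only ever occupies a non-standard-lane worker.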
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer-to-peer network, doesn't have to accept that transaction - you can reject it. If it looks suspicious to the node, it can just say, you know, we're not going to deal with that; or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can't do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities in the software.
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60 There's a lot of discussion about the re-enabled opcodes coming - OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled? Steve: 0:26:42.01,0:28:17.01
Well, one of the most significant things is that, other than two (OP_2MUL and OP_2DIV, which are minor variants of MUL and DIV), they represent almost the complete set of original opcodes. That's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why we're putting OP_MUL back in if we're planning on changing the arithmetic operations to big-number operations instead of the 32-bit limit they're currently restricted to. The simple answer is that we currently have all of the other arithmetic operations except for OP_MUL. We've got add, subtract, divide, modulo - it's odd to have a script system with all the mathematical primitives except multiplication. The other answer is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine - or rather, not completing it, but putting it back the way it was meant to be.
Connor: 0:28:20.42,0:29:22.62 Big numbers vs 32-bit. Daniel - I think I saw you answer this on Reddit a little while ago - the new opcodes use logical shifts, while Satoshi's versions used arithmetic shifts. The general question that a lot of people keep bringing up, maybe rhetorically, is: why not restore it back to exactly the way Satoshi had it? What are the benefits of changing it now to operate a little bit differently? Daniel: 0:29:18.75,0:31:12.15
Yeah, there are two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. When we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. So the new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic shifts - I think I've posted a short script that did that - but we can't do it the other way around: you couldn't use an arithmetic shift operator to implement a bitwise one. That's because of the ordering of the bytes in the arithmetic values - the values that represent numbers are little-endian, which means they're swapped around compared to what many other systems use and what I'd consider normal, big-endian. If you start shifting that as a number, the shifting sequence in the bytes is a bit strange - so it couldn't go the other way around, you couldn't implement a bitwise shift with an arithmetic one. So we chose to make them bitwise operators - that's what we proposed.
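The byte-order point can be made concrete. In this rough Python sketch (simplified to non-negative values - real script-number encoding also handles a sign bit for negatives), shifting the raw little-endian bytes as a plain bit string gives a different result from doubling the decoded number:

```python
def encode_num(n):
    """Bitcoin-script-style minimal little-endian encoding
    (simplified: non-negative values only, for illustration)."""
    if n == 0:
        return b""
    out = bytearray()
    while n:
        out.append(n & 0xFF)
        n >>= 8
    if out[-1] & 0x80:        # avoid the top bit looking like a sign bit
        out.append(0x00)
    return bytes(out)

def bitwise_lshift(data, bits):
    """OP_LSHIFT-style shift: treat the operand as a plain string of
    bits (leftmost byte first), not as a number. Width is preserved."""
    n = int.from_bytes(data, "big") << bits
    return (n & ((1 << (8 * len(data))) - 1)).to_bytes(len(data), "big")

val = encode_num(1000)
print(val.hex())                        # e803  - little-endian!

# Shifting the raw bytes left by one is NOT the same as computing
# 2000 and re-encoding it, precisely because of the byte order:
print(bitwise_lshift(val, 1).hex())     # d006  (plain bit-string shift)
print(encode_num(2000).hex())           # d007  (arithmetic doubling)
```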
That was essentially a consequence of decisions that were made in May. In May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was also made then. So that was not a decision we made unilaterally - it was made collectively with all of the BCH developers. Well, not all of them were actually in all of the meetings, but they were all invited.
Another example of that is that we originally proposed OP_2DIV and OP_2MUL - these are single operators that divide or multiply a value by two - but it was pointed out that the same thing can very easily be achieved by just pushing two and multiplying (or dividing), instead of having a separate operator for it. So we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum.
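The redundancy argument is easy to see in a toy stack machine (illustrative only - not the actual node's interpreter):

```python
# Tiny stack-machine sketch showing why a dedicated OP_2MUL opcode is
# redundant: pushing 2 and multiplying produces the same stack.
def op_push(n):
    return lambda stack: stack.append(n)

def op_mul(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a * b)

def op_2mul(stack):           # the scrapped dedicated opcode
    stack.append(stack.pop() * 2)

def run(ops):
    stack = []
    for op in ops:
        op(stack)
    return stack

# <x> OP_2MUL  ==  <x> OP_2 OP_MUL
print(run([op_push(21), op_2mul]))             # [42]
print(run([op_push(21), op_push(2), op_mul]))  # [42]
```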
There was an appetite for keeping the operators minimal. The idea to replace OP_SUBSTR, OP_LEFT, and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) to treat the data as big-endian data streams (well, sorry - big-endian isn't really applicable, just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have been nonsensical and very difficult for anyone to work with. It's a bit like P2SH - it wasn't part of the original Satoshi protocol, but once some things are done they're done, and if you want to make forward progress you've got to work within the framework that exists.
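As a concrete picture of how a scrapped string opcode decomposes into the primitives that replaced it, OP_SUBSTR can be built from OP_SPLIT plus the stack operators OP_SWAP and OP_DROP. A rough Python simulation of the stack (illustrative, not real script evaluation):

```python
# Simulated stack items are byte strings; each op_* mutates the stack
# the way the corresponding script opcode would.
def op_split(stack):
    n = stack.pop()
    s = stack.pop()
    stack.append(s[:n]); stack.append(s[n:])

def op_swap(stack):
    stack[-1], stack[-2] = stack[-2], stack[-1]

def op_drop(stack):
    stack.pop()

def substr(s, begin, size):
    """substr via: <s> <begin> SPLIT SWAP DROP <size> SPLIT DROP"""
    stack = [s, begin]
    op_split(stack)                   # -> s[:begin], s[begin:]
    op_swap(stack); op_drop(stack)    # drop the prefix
    stack.append(size)
    op_split(stack)                   # -> s[begin:begin+size], tail
    op_drop(stack)                    # drop the tail
    return stack[0]

print(substr(b"bitcoin", 3, 4))       # b'coin'
```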
When we get to the big number implementations it gets really complicated, because then you can't change the behavior of the existing opcodes - and I don't mean OP_MUL, I mean the ones that have been there for a while. You can't suddenly make them big-number operations without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts. The other point is that you don't know what scripts are out there, because of P2SH - there could be scripts whose content you don't know, and you don't know what effect changing the behavior of these operators would have on them. The big number thing is tricky, so another approach might be needed - I don't know what the options are; it needs some serious thought.
That's something we've reached out to the other implementation teams about - we'd really like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year, or when, but we're certainly willing to put a lot of resources into it, and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done, and getting it done safely.
Connor: 0:35:19.30,0:35:57.49 Along a similar vein - Bitcoin Core introduced this concept of standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for "non-standard scripts", as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that. So what are your thoughts about non-standard scripts and the IsStandard check as a whole? Steve: 0:35:58.31,0:37:35.73
I'd actually like to repurpose the concept. I mentioned before multi-threaded script validation with some dedicated well-known script templates - when you say "well-known script template", there's already a check in Bitcoin that kind of tells you whether a script is well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than a technical one. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard, but the reality is not many miners are using those options. So standard transactions as a concept is only meaningful to a degree, I suppose - but I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek, they're quite keen on making their miners accept, at least initially, a wider variety of transactions.
So I think IsStandard will remain important within the implementation itself for efficiency purposes - you want to streamline and prioritize the base use case of cash payments. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46 Connor mentioned that there are some people that disagree with Bitcoin SV and what they're doing - a lot of questions around: why November? Why implement these changes in November? They think that maybe a six-month delay might avoid a split. First off, what do you think about the idea of a potential split, and what is the urgency for November? Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork - I think on August 16th or 17th, something like that - and their client as well, and it included CTOR and it included DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - I mean, CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction (or even a non-P2SH transaction) into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but what we're doing - I mean, Bitcoin ABC already published their spec for May, and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't. Come November, it's a bit like SegWit - once SegWit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy and it's going to cause a lot of economic disruption. So yeah, that's it. We're putting our changes in because it's not going to make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
If we didn't include these changes in the November upgrade we'd be pushing out a release with no changes. But the November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61 Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been floated around, because of Ryan Charles, is the idea that it's a subsidy - that it takes a whole megabyte of data and kind of crunches it down, so the computation time stays the same but the cost is lower. Do you share his view on that, or what are your concerns with it? Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this - there are different ways to look at it. I'm an engineer - my specialization is software - so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise, I kind of agree with the ones saying it's a subsidy - it looks very much like one to me, but that's not my area. What I can talk about is the software. Adding DSV adds really quite a lot of complexity to the code, and it's a big change to make. And what are we going to do - every time someone comes up with an idea, we're going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like: how big is this client going to become? Is this node going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. With DSV, my main consideration from the beginning was: if you can implement it in script, you should do it, because that way it keeps the node software simple, it keeps it stable, and it's easier to test that it works properly and correctly. It's almost like adding (?) code to a microprocessor - why would you do that if you can already implement it in the script that is there?
It's actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - it's achievable as a combination of the SPLIT, SWAP, and DROP opcodes. So at the really primitive script level we've got this philosophy of keeping it minimal, and then at this other (?) level the philosophy is let's just add a new opcode for every primitive function. Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard-code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?) - you could handle it directly and not bother executing a script at all. But once we've done that, we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel like they can't do are because of the limits that exist. If we had no opcode limit at all - if you could make a gigabyte transaction, so a gigabyte script - then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller - a Rabin signature script shrinks from 100MB to a couple of hundred bytes.
I lost a good six months of my life diving into script. Once you start getting into the language and what it can do, it's really pretty impressive how much you can achieve within script. Bitcoin was released originally with script. It didn't have to be - instead of having transactions with script, you could have had accounts, and you could say, you know, transfer so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you really dig into what it can do, it's amazing. I'm really looking forward to seeing some very interesting applications built on it. Awemany's zero-conf script was really interesting - it relies on DSV, which is a problem (and there are some other things I don't like about it), but him diving in and using script to solve this problem was really cool; it was really good to see that.
I put this question to a couple of people in our research team who have been working on the Rabin signature stuff just this morning, actually, as I wasn't sure where they were up to with it. They're working on a proof of concept (which I believe is pretty close to done) of a Rabin signature script - it will use smaller signatures so that it can fit within the current limits, but it will be effectively the same algorithm (as DSV). I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
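For intuition, here is a toy Rabin scheme with made-up tiny parameters - emphatically not the team's proof of concept; real Rabin signatures use large moduli and hash padding. The point is that verification is a single squaring mod n, which is why it maps onto script's arithmetic primitives once big numbers are available:

```python
# Toy Rabin signature sketch with invented parameters.
p, q = 7, 11              # toy primes, both ≡ 3 (mod 4)
n = p * q                 # public modulus

def is_quadratic_residue(h, prime):
    # Euler's criterion
    return pow(h, (prime - 1) // 2, prime) == 1

def sign(h):
    """Find s with s^2 ≡ h (mod n) via square roots mod p and q,
    combined by brute-force CRT (fine for a toy modulus)."""
    assert is_quadratic_residue(h % p, p) and is_quadratic_residue(h % q, q)
    sp = pow(h, (p + 1) // 4, p)      # root mod p (p ≡ 3 mod 4 trick)
    sq = pow(h, (q + 1) // 4, q)      # root mod q
    for s in range(n):
        if s % p == sp and s % q == sq:
            return s

def verify(h, s):
    # Verification is just one modular multiplication - expressible
    # with multiply and modulo primitives in script.
    return (s * s) % n == h % n

h = 23                    # stands in for a (padded) message hash
s = sign(h)
print(s, verify(h, s))    # 67 True
```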
Cory: 0:48:13.61,0:48:57.63 Based on your responses I think I kind of already know the answer to this, but there are a lot of questions about ending experimentation on Bitcoin. With the plan that Bitcoin SV is on, do you see a potential final release - no new opcodes ever, where maybe five years down the road we just solidify the base protocol and move forward with that - or are you more open-ended, where new opcodes can be introduced under appropriate testing? Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place, and in my opinion it's the cryptographic primitive functions. For example, CHECKSIG uses ECDSA with a specific elliptic curve, and HASH256 uses SHA-256. At some point in the future those are no longer going to be as secure as we would like them to be, and we'll replace them with different hash and verification functions - but I think that's a long way down the track.
I'd like to see more data, too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, with the full scripting language, some solution is implemented and we discover that it's really useful; and over a period measured in years, not days, we find a lot of transactions using this feature. Then maybe we should look at introducing an opcode to optimize it. But optimizing before we even know if it's going to be useful - that's the wrong approach.
I think that optimization is actually going to become an economic decision for the miners. From a miner's point of view: does it make sense for them to optimize a particular process - does it reduce costs such that they can offer a better service to everyone else? Ultimately these are going to be miners' decisions, not developer decisions. Developers can of course offer their input - I wouldn't expect every miner to be an expert on script, but as we're already seeing, miners are starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have some really bright people on their staff who question and challenge all of the changes, study them, and produce their own reports. We've been lucky to be able to talk to some of those people and have some really fascinating technical discussions with them.
Breaking: Tiananmen Square Massacre of 1989 information, strictly prohibited by the Chinese Government, has been embedded in the Bitcoin Blockchain. Buckle up folks, this may get interesting.
Yesterday I proposed a possible method to end the Chinese Bitcoin Mining Monopoly by embedding pro-freedom/anti-Chinese government tyranny information prohibited by the Chinese Government on the Bitcoin Blockchain. Original thread here: https://np.reddit.com/Bitcoin/comments/60apqg/a_proposal_for_a_simple_inexpensive_and_effective/ Here is the text embedded in the Bitcoin Blockchain: 中国：应公布六四屠杀真相 1989后打压人权在习近平任内达高峰 （纽约，2016年6月2日）－人权观察今天表示，中国政府应停止否认国家在1989年6月4日前后屠杀无武装民运人士和市民事件中的角色，承认政府应对于与镇压该示威活动有关的杀人、拘押和迫害行为负起责任。 Tiananmen Square, Beijing in June 1989. 展开 天安门，北京，1989年6月 中国政府应展现诚意，立即停止拘押和骚扰纪念「六四」人士，会见幸存者及其家属，并释放因追悼「六四」而自2014年7月被关押至今的维权人士于世文。 “中国当局应将其亏欠的正义与究责还给屠杀幸存者及其家属，” 人权观察中国部主任索菲・理查森（Sophie Richardson）说。“1989年迄今政治打压，不但未能遏止要求基本自由与负责政府的呼声，反而使中共的合法性加倍流失。” 和往年同样， 当局已在「六四」周年来临前提升戒备，严防出现悼念活动： 2016年5月28日，成都当局以煽动颠覆罪名拘捕符海陆，他被怀疑在社交媒体发布贴有与「六四」有关标签的酒瓶图片。 据维权网披露，另有至少四人因为纪念「六四」而被警方拘留，包括成都诗人马青和北京维权人士徐彩虹、赵长青、张宝成。 当局并将多名维权人士软禁或限制行动，包括天安门母亲发起人丁子霖和山东退休教授孙文广。 著名记者高瑜虽在2015年11月获准保外就医出狱，仍须在家中服完五年刑期；一直受到实质软禁的前中共高干鲍彤，则被强迫以「旅游」名义离开北京。 1989年至今，中国政府一直违背国内法和国际人权法义务，严格限制基本人权──特别是言论、集会、结社自由和参政权。然而，对异议人士的不容忍，自2013年3月习近平掌权后更达高峰。中国政府正研拟或已通过数项新的国家安全法律，加强对公民社会的限制和管控；互联网和媒体言论空间受到进一步紧缩；数百名维权人士遭到拘押和判刑；意见领袖和自由派知识分子被刻意起诉；同时，政府还大力推行党领导一切的“正确思想”。 虽然最后一位因参与八九民运入狱人士可望于2016年10月刑满释放，但有许多当年示威者出狱后继续从事维权活动而再被关押。1989年组织广州民运活动而坐牢18个月的于世文，即因悼念「六四」而于2014年被拘押至今。其他资深维权人士，包括诺贝尔和平奖得主刘晓波、四川维权人士刘贤斌、陈卫和广东维权人士郭飞雄，分别被判处重刑或以政治罪名遭到羁押。 中国当局应将其亏欠的正义与究责还给屠杀幸存者及其家属 理查森 中国部主任, 人权观察 当局防范「六四」议题的另一方式，是禁止屠杀后逃亡海外的八九民运组织者或参与者返国。例如，前学生领袖吾尔开希、熊焱至今归国无门，两人虽曾在2013到2014年屡次闯关，但均遭香港当局拒绝入境。 中国政府持续否认屠杀和平示威者，敌视和平的民众参与，与其他地方的发展形成强烈对比。在2016年5月的就职演说中，台湾新任总统蔡英文宣示将以成立真相与和解委员会的方式“面对过去”，俾能记取“那个时代的错误”──她指的应是所谓“白色恐怖”时期的政治迫害。缅甸经历50年军事独裁后，现在也已开始向选举民主转型。 背景：1989血腥镇压 天安门屠杀缘起于学生、工人和其他群众，为呼吁言论自由、责任政治和扫除腐败，于1989年4月在北京天安门广场及各大城市发起和平集会。随着示威活动日益扩大，政府在1989年5月下旬宣布戒严。 1989年6月3日到4日，军队开火杀害不明人数的和平示威者和旁观者。在北京，有部分市民为反击军方暴力而攻击运兵车队，焚烧交通工具。屠杀后，政府实施全国性镇压，以“反革命”和扰乱社会秩序、纵火等刑事罪名逮捕数千人。 
中国政府从未承认对屠杀负有责任，也未曾将任何杀人凶手移送法办。它既拒绝对事件进行调查，也不愿公布关于死亡、受伤、失踪或服刑者的数据。主要由死难者家属组成的一个非政府组织，天安门母亲，收集了202名在北京和其他城市遭镇压死亡者的详细资料。27年过去了，许多天安门母亲成员身患病痛，部分已经去世，却未能见到正义伸张，也不知道他们的亲人究竟如何罹难。 人权观察呼吁中国政府把握「六四」27周年的机会，彻底改变官方对此事件的立场。具体而言，它应做到： 尊重言论、结社与和平集会自由权，停止骚扰及任意拘押质疑「六四」官方说法的人士； 与天安门母亲成员会面，并向他们道歉； 允许对「六四」事件进行独立、公开的调查，并尽速将结果公诸大众； 允许因「六四」流亡海外的中国公民自由返国；以及 调查所有参与策划或指挥非法利用致命武力对付和平示威者的官员和军官，并公布死难者名单。 “自1989年以来，中国在政治改革方面不仅毫无进展，反而在原地踏步甚至向后退却，”理查森说。“北京要想向前跃进，就必须正视过去的伤痛。这不但有其他怀抱自信政府的先例可循，也是全中国的民心所向。” China: Tell the Truth About Tiananmen on Anniversary Repression of Rights at Post-1989 Peak Under President Xi (New York) – The Chinese government should cease its denial about the state’s role in the massacre of unarmed pro-democracy protesters and citizens around June 4, 1989, and acknowledge the government’s responsibility for the killings, detentions, and persecution associated with suppression of the protests, Human Rights Watch said today. Tiananmen Square, Beijing in June 1989. Beijing should demonstrate that commitment by immediately ceasing its detention and harassment of individuals marking the occasion, meeting with survivors and their family members, and releasing Yu Shiwen, an activist held since July 2014 for commemorating the massacre. “Chinese authorities owe a debt of justice and accountability to survivors of the massacre and their family members,” said Sophie Richardson, China director. 
“Political repression since 1989 has not eliminated yearnings for basic freedoms and an accountable government – instead it has only compounded the Party’s lack of legitimacy.” As in previous years, authorities have been on high alert ahead of the anniversary to preempt commemorations of the massacre: In Chengdu on May 28, 2016, authorities detained Fu Hailu on subversion charges; he is suspected of posting on social media images of liquor bottles with labels related to the crackdown. At least four others – poet Ma Qing in Chengdu, and activists Xu Caihong, Zhao Changqing, and Zhang Baocheng in Beijing – are believed to be in police custody for commemorating the occasion, according to the nongovernmental organization Chinese Human Rights Defenders. Authorities have also put under house arrest or restricted the movement of a number of activists, including Ding Zilin, a founding member of the Tiananmen Mothers, and retired Shandong professor Sun Wenguang. Prominent journalist Gao Yu, who in November 2015 was released from prison on medical parole to serve out her five year sentence at home, and former top official Bao Tong, who remains under effective house arrest, have been required to leave Beijing for enforced “vacations.” Since 1989, the Chinese government has kept tight control over basic human rights – particularly freedoms of expression, assembly, and association, and the right to political participation – despite its obligations under domestic and international human rights law. Intolerance toward dissent, however, has reached a new peak since President Xi Jinping came to power in March 2013. 
The government has drafted or promulgated new state security laws that put in place more restrictive controls over civil society; further curtailed expression on the Internet and media; detained and imprisoned hundreds of activists in successive waves of arrests; targeted for prosecution public opinion leaders and liberal thinkers; and aggressively promoted the “correct ideology” of Party supremacy. While the last individual known to be imprisoned for his involvement in the 1989 protests will be released in October 2016, many who were involved in the demonstrations and who continued their activism after their release have been re-incarcerated. Yu Shiwen, who spent 18 months in prison for his 1989 work organizing pro-democracy efforts in Guangzhou, has been detained since 2014 for commemorating the massacre that year. Other veteran activists, including Nobel Peace Prize winner Liu Xiaobo, Sichuan activists Liu Xianbin and Chen Wei, and Guangdong activist Guo Feixiong are either serving long prison sentences or have been detained on political charges. Chinese authorities owe a debt of justice and accountability to survivors of the massacre and their family members. Authorities have also prevented discussions about the massacre by blocking organizers of, or participants in, the 1989 protests from returning from other countries where they sought refuge in the aftermath of the massacre. Former student leaders Wuer Kaixi and Xiong Yan, for example, have been unable to re-enter China. Their repeated attempts to return in 2013 and 2014 were rejected by Hong Kong authorities. The Chinese government’s continued denial of the massacre of protesters and hostility toward peaceful political participation contrast sharply with developments elsewhere. 
In her May 2016 inaugural address, Tsai Ing-wen, Taiwan’s new president, vowed to “face the past” by setting up a new Truth and Reconciliation Commission to investigate “mistakes” of “the era” – which likely refers to the period of political repression known as the White Terror. After five decades of military dictatorship, Burma has begun a transition to electoral democracy. Background: Bloodshed in 1989 The Tiananmen massacre was precipitated by the peaceful gatherings of students, workers, and others in Beijing’s Tiananmen Square and other cities in April 1989 calling for freedom of expression, accountability, and an end to corruption. The government responded to the intensifying protests in late May 1989 by declaring martial law. On June 3 and 4, the military opened fire and killed untold numbers of peaceful protesters and bystanders. In Beijing, some citizens attacked army convoys and burned vehicles in response to the military’s violence. Following the killings, the government implemented a national crackdown and arrested thousands of people for “counter-revolution” and other criminal charges, including disrupting social order and arson. The government has never accepted responsibility for the massacre or held any perpetrators legally accountable for the killings. It has refused to conduct an investigation into the events or release data on those who were killed, injured, disappeared, or imprisoned. The nongovernmental organization Tiananmen Mothers, consisting mostly of family members of those killed, has established the details of 202 people who were killed during the suppression of the movement in Beijing and other cities. Twenty-seven years on, many members of the Tiananmen Mothers are ailing and some have died without seeing justice or knowing precisely what has happened to their family members. Human Rights Watch called on the Chinese government to use the opportunity of the 27th anniversary of June 4, 1989, to reverse its current position on the event. 
Specifically, it should: Respect the rights to freedom of expression, association, and peaceful assembly and cease the harassment and arbitrary detention of individuals who challenge the official account of June 4; Meet with and apologize to members of the Tiananmen Mothers; Permit an independent public inquiry into June 4, and promptly release its findings and conclusions to the public; Allow the unimpeded return of Chinese citizens exiled due to their connections to the events of 1989; and Investigate all government and military officials who planned or ordered the unlawful use of lethal force against peaceful demonstrators, and publish the names of all those who died. “Instead of advancing, China has stagnated, and even regressed, in terms of political reforms since 1989,” Richardson said. “Beijing can only move forward by facing up to its painful past, as others have had the confidence to do, and as people across China clearly want.” At the very least, this is proving the concept of information delivery and storage utilizing the Bitcoin Blockchain bypassing international borders and laws. You can verify that this has in fact been embedded in the Bitcoin Blockchain yourself here: http://www.cryptograffiti.info/#4901 What reaction do you think the tyrannical Chinese Government will have to this information being distributed within China by Bitcoin Miners hosting nodes inside of the Great Firewall of China? If this continues, at some point they will certainly take action to close this avenue of Freedom of Speech. Will they force the Miners to adopt a fork which rolls back the Blockchain to scrub this prohibited information from the Bitcoin Blockchain, and thus effectively create a new altcoin? Who will follow this new blockchain, and who will follow the original?
Blockstream CTO Greg Maxwell u/nullc, February 2016: "A year ago I said I though we could probably survive 2MB". August 2017: "Every Bitcoin developer with experience agrees that 2MB blocks are not safe". Whether he's incompetent, corrupt, compromised, or insane, he's unqualified to work on Bitcoin.
Here's Blockstream CTO Greg Maxwell u/nullc posting on February 1, 2016:
"Even a year ago I said I though we could probably survive 2MB" - nullc
Meanwhile, there is one thing we do know with certainty: Blockstream CTO Greg Maxwell u/nullc is either incompetent or corrupt or compromised or insane - or some combination of the above. Therefore Blockstream CTO Greg Maxwell u/nullc is not qualified to be involved with Bitcoin. Background information
"Even a year ago I said I though we could probably survive 2MB" - nullc ... So why the fuck has Core/Blockstream done everything they can to obstruct this simple, safe scaling solution? And where is SegWit? When are we going to judge Core/Blockstream by their (in)actions - and not by their words?
Previously, Greg Maxwell u/nullc (CTO of Blockstream), Adam Back u/adam3us (CEO of Blockstream), and u/theymos (owner of r\bitcoin) all said that bigger blocks would be fine. Now they prefer to risk splitting the community & the network, instead of upgrading to bigger blocks. What happened to them?
Core/Blockstream is living in a fantasy world. In the real world everyone knows (1) our hardware can support 4-8 MB (even with the Great Firewall), and (2) hard forks are cleaner than soft forks. Core/Blockstream refuses to offer either of these things. Other implementations (eg: BU) can offer both.
Overheard on r\bitcoin: "And when will the network adopt the Segwit2x(tm) block size hardfork?" ~ u/DeathScythe676 // "I estimate that will happen at roughly the same time as hell freezing over." ~ u/nullc, One-Meg Greg mAXAwell, CTO of the failed shitty startup Blockstream
Either Greg Maxwell - an insane, toxic dev who denies reality - decides the blocksize.
Or the market decides the blocksize.
The debate is not "SHOULD THE BLOCKSIZE BE 1MB VERSUS 1.7MB?". The debate is: "WHO SHOULD DECIDE THE BLOCKSIZE?" (1) Should an obsolete temporary anti-spam hack freeze blocks at 1MB? (2) Should a centralized dev team soft-fork the blocksize to 1.7MB? (3) OR SHOULD THE MARKET DECIDE THE BLOCKSIZE?
"Either the main chain will scale, or a unhobbled chain that provides scaling (like Bitcoin Cash) will become the main chain - and thus the rightful holder of the 'Bitcoin' name. In other words: Either Bitcoin will get scaling - or scaling will get 'Bitcoin'." ~ u/Capt_Roger_Murdock
Bitcoin Original: Reinstate Satoshi's original 32MB max blocksize. If actual blocks grow 54% per year (and price grows 1.54² ≈ 2.37x per year - Metcalfe's Law), then in 8 years we'd have 32MB blocks, 100 txns/sec, 1 BTC = 1 million USD - 100% on-chain P2P cash, without SegWit/Lightning or Unlimited
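The compound-growth arithmetic in the post above checks out, and can be verified directly. A minimal sketch, assuming blocks start at roughly 1 MB and the price at roughly $1,000 (both starting points are my assumptions, consistent with the 2016-era figures in these posts):

```python
# Checking the claim: 54%/year block growth for 8 years, with price
# following Metcalfe's Law (value ~ square of usage).
# The ~1 MB block and ~$1,000/BTC starting points are assumptions.
growth = 1.54
years = 8

block_size_mb = 1.0 * growth ** years      # block size after 8 years
price_multiple = (growth ** 2) ** years    # Metcalfe: price grows as the square

print(f"block size: {block_size_mb:.1f} MB")     # ~31.6 MB (close to 32 MB)
print(f"price multiple: {price_multiple:.0f}x")  # ~1001x ($1,000 -> ~$1M)
```

Note that 1.54² ≈ 2.37, which is where the "2.37x per year" price-growth figure comes from.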
ELI85 BCC vs BTC, for Grandma (1) BCC has BigBlocks (max 8MB), BTC has SmallBlocks (max 1-2?MB); (2) BCC has StrongSigs (signatures must be validated and saved on-chain), BTC has WeakSigs (signatures can be discarded with SegWit); (3) BCC has SingleSpend (for zero-conf); BTC has Replace-by-Fee (RBF)
Bitcoin Cash is the original Bitcoin as designed by Satoshi Nakamoto (and not suppressed by the insane / incompetent / corrupt / compromised / toxic Blockstream CTO Greg Maxwell). Bitcoin Cash simply continues with Satoshi's original design and roadmap, whose success has always been and always will be based on three essential features:
high on-chain market-based capacity supporting a greater number of faster and cheaper transactions on-chain;
strong on-chain cryptographic security guaranteeing that transaction signatures are always validated and saved on-chain;
prevention of double-spending guaranteeing that the same coin can only be spent once.
This means that Bitcoin Cash is the only version of Bitcoin which maintains support for:
BigBlocks, supporting increased on-chain transaction capacity - now supporting blocksizes up to 8MB (unlike the Bitcoin-SegWit(2x) "centrally planned blocksize" bug added by Core - which only supports 1-2MB blocksizes);
StrongSigs, enforcing mandatory on-chain signature validation - continuing to require miners to download, validate and save all transaction signatures on-chain (unlike the Bitcoin-SegWit(2x) "segregated witness" bug added by Core - which allows miners to discard or avoid downloading signature data);
SingleSpend, allowing merchants to continue to accept "zero confirmation" transactions (zero-conf) - facilitating small, in-person retail purchases (unlike the Bitcoin-SegWit(2x) Replace-by-Fee (RBF) bug added by Core - which allows a sender to change the recipient and/or the amount of a transaction, after already sending it).
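The SingleSpend-vs-RBF point above can be illustrated with a toy model. This is a deliberately simplified sketch of mempool acceptance rules, not Bitcoin Core's actual implementation; the function names, transaction shape, and flat fee comparison are all illustrative assumptions:

```python
# Toy mempool model illustrating why Replace-by-Fee (RBF) undermines
# zero-conf: with a "first-seen" rule a conflicting respend is rejected,
# while RBF lets a higher-fee replacement evict the original payment.
# All names here are illustrative, not Bitcoin Core's actual API.

def accept(mempool, tx, rbf=False):
    conflict = next((t for t in mempool if t["spends"] == tx["spends"]), None)
    if conflict is None:
        mempool.append(tx)
        return True
    if rbf and tx["fee"] > conflict["fee"]:
        mempool.remove(conflict)        # original payment silently replaced
        mempool.append(tx)
        return True
    return False                        # first-seen: respend rejected

pay_merchant = {"spends": "coin-1", "to": "merchant", "fee": 1}
respend      = {"spends": "coin-1", "to": "attacker", "fee": 5}

pool = []
accept(pool, pay_merchant)
print(accept(pool, respend, rbf=False))  # False - zero-conf holds
print(accept(pool, respend, rbf=True))   # True  - merchant's tx evicted
```

Under the first-seen rule the merchant can reasonably trust an unconfirmed payment; under RBF the same coin can be redirected with a higher fee before confirmation.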
If you were holding Bitcoin (BTC) before the fork on August 1 (where you personally controlled your private keys) then you also automatically have an equal quantity of Bitcoin Cash (BCC, or BCH) - without the need to do anything.
Many exchanges and wallets are starting to support Bitcoin Cash. This includes more and more exchanges which have agreed to honor their customers' pre-August 1 online holdings on both forks - Bitcoin (BTC) and Bitcoin Cash (BCC, or BCH).
Greg Maxwell /u/nullc (CTO of Blockstream) has sent me two private messages in response to my other post today (where I said "Chinese miners can only win big by following the market - not by following Core/Blockstream."). In response to his private messages, I am publicly posting my reply, here:
Note: Greg Maxwell nullc sent me 2 short private messages criticizing me today. For whatever reason, he seems to prefer messaging me privately these days, rather than responding publicly on these forums. Without asking him for permission to publish his private messages, I do think it should be fine for me to respond to them publicly here - only quoting 3 phrases from them, namely: "340GB", "paid off", and "integrity" LOL. There was nothing particularly new or revealing in his messages - just more of the same stuff we've all heard before. I have no idea why he prefers responding to me privately these days. Everything below is written by me - I haven't tried to upload his 2 PMs, since he didn't give permission (and I didn't ask). The only stuff below from his 2 PMs is the 3 phrases already mentioned: "340GB", "paid off", and "integrity". The rest of this long wall of text is just my "open letter to Greg."

TL;DR: The code that maximally uses the available hardware and infrastructure will win - and there is nothing Core/Blockstream can do to stop that. Also, things like the Berlin Wall or the Soviet Union lasted for a lot longer than people expected - but, conversely, they also got swept away a lot faster than anyone expected. The "vote" for bigger blocks is an ongoing referendum - and Classic is running on 20-25% of the network (and can and will jump up to the needed 75% very fast, when investors demand it due to the inevitable "congestion crisis") - which must be a massive worry for Greg/Adam/Austin and their backers from the Bilderberg Group. The debate will inevitably be decided in favor of bigger blocks - simply because the market demands it, and the hardware / infrastructure supports it.

Hello Greg Maxwell nullc (CTO of Blockstream) - Thank you for your private messages in response to my post.
I respect (most of) your work on Bitcoin, but I think you were wrong on several major points in your messages, and in your overall economic approach to Bitcoin - as I explain in greater detail below:

Correcting some inappropriate terminology you used

As everybody knows, Classic or Unlimited or Adaptive (all of which I did mention specifically in my post) do not support "340GB" blocks (which I did not mention in my post). It is therefore a straw-man for you to claim that big-block supporters want "340GB" blocks. Craig Wright may want that - but nobody else supports his crazy posturing and ridiculous ideas. You should know that what actual users / investors (and Satoshi) actually do want is to let the market and the infrastructure decide on the size of actual blocks - which could be around 2 MB, or 4 MB, etc. - gradually growing in accordance with market needs and infrastructure capabilities (free from any arbitrary, artificial central planning and obstructionism on the part of Core/Blockstream, and its investors - many of whom have a vested interest in maintaining the current debt-backed fiat system). You yourself (nullc) once said somewhere that bigger blocks would probably be fine - ie, they would not pose a decentralization risk. I found the link: https://np.reddit.com/btc/comments/43mond/even_a_year_ago_i_said_i_though_we_could_probably/ I am also surprised that you now seem to be among those making unfounded insinuations that posters such as myself must somehow be "paid off" - as if intelligent observers and participants could not decide on their own, based on the empirical evidence, that bigger blocks are needed, when the network is obviously becoming congested and additional infrastructure is obviously available.
Random posters on Reddit might say and believe such conspiratorial nonsense - but I had always thought that you, given your intellectual abilities, would have been able to determine that people like me are able to arrive at supporting bigger blocks quite entirely on our own, based on two simple empirical facts, ie:
the infrastructure supports bigger blocks now;
the market needs bigger blocks now.
In the present case, I will simply assume that you might be having a bad day, for you to erroneously and groundlessly insinuate that I must be "paid off" in order to support bigger blocks.

Using Occam's Razor

The much simpler explanation is that bigger-block supporters believe they will get "paid off" from bigger gains on their investment in Bitcoin. Rational investors and users understand that bigger blocks are necessary, based on the apparent correlation (not necessarily causation!) between volume and price (as mentioned in my other post, and backed up with graphs). And rational network capacity planners (a group which you should be in - but for some mysterious reason, you're not) also understand that bigger blocks are necessary, and quite feasible (and do not pose any undue "centralization risk"). As I have been on the record for months publicly stating, I understand that bigger blocks are necessary based on the following two objective, rational reasons:
because I've seen the empirical research in the field (from guys like Gavin and Toomim) showing that the network infrastructure (primarily bandwidth and latency - but also RAM and CPU) would also support bigger blocks now (I believe they showed that 3-4MB blocks would definitely work fine on the network now - possibly even 8 MB - without causing undue centralization).
Bigger-block supporters are being objective; smaller-block supporters are not

I am surprised that you no longer talk about this debate in these kinds of objective terms:
bandwidth, latency (including Great Firewall of China), RAM, CPU;
At this point, the burden is on guys like you (nullc) to explain why you support a so-called scaling "roadmap" which is not aligned with:
simple, rational investment policy; and
simple, rational capacity planning
The burden is also on guys like you to show that you do not have a conflict of interest, due to Blockstream's highly-publicized connections (via insurance giant AXA - whose CEO is also the Chairman of the Bilderberg Group; and companies such as the "Big 4" accounting firm PwC) to the global cartel of debt-based central banks with their infinite money-printing.

In a nutshell, the argument of big-block supporters is simple: If the hardware / network infrastructure supports bigger blocks (and it does), and if the market demands it (and it does), then we certainly should use bigger blocks - now. You have never provided a counter-argument to this simple, rational proposition - for the past few years. If you have actual numbers or evidence or facts or even legitimate concerns (regarding "centralization risk" - presumably your only argument) then you should show such evidence. But you never have. So we can only assume either incompetence or malfeasance on your part.

As I have also publicly and privately stated to you many times, with the utmost of sincerity: We do of course appreciate the wealth of stellar coding skills which you bring to Bitcoin's cryptographic and networking aspects. But we do not appreciate the obstructionism and centralization which you also bring to Bitcoin's economic and scaling aspects. Bitcoin is bigger than you. The simple reality is this: If you can't / won't let Bitcoin grow naturally, then the market is going to eventually route around you, and billions (eventually trillions) of investor capital and user payments will naturally flow elsewhere. So: You can either be the guy who wrote the software to provide simple and safe Bitcoin scaling (while maintaining "reasonable" decentralization) - or the guy who didn't. The choice is yours. The market, and history, don't really care about:
whether you yourself might have been "paid off" (or under a non-disclosure agreement written perhaps by some investors associated with the Bilderberg Group and the legacy debt-based fiat money system which they support), or
whether or not you might be clueless about economics.
Crypto and/or Bitcoin will move on - with or without you and your obstructionism.

Bigger-block supporters, including myself, are impartial

By the way, my two recent posts this past week on the Craig Wright extravaganza...
...should have given you some indication that I am being impartial and objective, and I do have "integrity" (and I am not "paid off" by anybody, as you so insultingly insinuated). In other words, much like the market and investors, I don't care who provides bigger blocks - whether it would be Core/Blockstream, or Bitcoin Classic, or (the perhaps confusingly-named) "Bitcoin Unlimited" (which isn't necessarily about some kind of "unlimited" blocksize, but rather simply about liberating users and miners from being "limited" by controls imposed by any centralized group of developers, such as Core/Blockstream and the Bilderbergers who fund you). So, it should be clear by now I don't care one way or the other about Gavin personally - or about you, or about any other coders. I care about code, and arguments - regardless of who is providing such things - eg:
When Gavin didn't demand crypto proof from Craig, and you said you would have: I publicly criticized Gavin - and I supported you.
When you continue to impose needless obstacles to bigger blocks, then I continue to criticize you.
In other words, as we all know, it's not about the people. It's about the code - and what the market wants, and what the infrastructure will bear. You of all people should know that that's how these things should be decided. Fortunately, we can take what we need, and throw away the rest.

Your crypto/networking expertise is appreciated; your dictating of economic parameters is not

As I have also repeatedly stated in the past, I pretty much support everything coming from you, nullc:
your crypto and networking and game-theoretical expertise,
your extremely important work on Confidential Transactions / homomorphic encryption.
your desire to keep Bitcoin decentralized.
And I (and the network, and the market/investors) will always thank you profusely and quite sincerely for these massive contributions which you make. But open-source code is (fortunately) à la carte. It's mix-and-match. We can use your crypto and networking code (which is great) - and we can reject your cripple-code (artificially small 1 MB blocks), throwing it where it belongs: in the garbage heap of history. So I hope you see that I am being rational and objective about what I support (the code) - and that I am also always neutral and impartial regarding who may (or may not) provide it. And by the way: Bitcoin is actually not as complicated as certain people make it out to be. This is another point which might be lost on certain people, including:
And that point is this: The crypto code behind Bitcoin actually is very simple. And the networking code behind Bitcoin is also fairly simple. Right now you may be feeling rather important and special, because you're part of the first wave of development of cryptocurrencies. But if the cryptocurrency which you're coding (Core/Blockstream's version of Bitcoin, as funded by the Bilderberg Group) fails to deliver what investors want, then investors will dump you so fast your head will spin. Investors care about money, not code. So bigger blocks will eventually, inevitably come - simply because the market demand is there, and the infrastructure capacity is there. It might be nice if bigger blocks would come from Core/Blockstream. But who knows - it might actually be nicer (in terms of anti-fragility and decentralization of development) if bigger blocks were to come from someone other than Core/Blockstream. So I'm really not begging you - I'm warning you, for your own benefit (your reputation and place in history), that: Either way, we are going to get bigger blocks. Simply because the market wants them, and the hardware / infrastructure can provide them. And there is nothing you can do to stop us. So the market will inevitably adopt bigger blocks either with or without you guys - given that the crypto and networking tech behind Bitcoin is not all that complex, and it's open-source, and there is massive pent-up investor demand for cryptocurrency - to the tune of multiple billions (or eventually trillions) of dollars.

It ain't over till the fat lady sings. Regarding the "success" which certain small-block supporters are (prematurely) gloating about, during this time when a hard-fork has not happened yet: they should bear in mind that the market has only begun to speak. And the first thing it did when it spoke was to dump about 20-25% of Core/Blockstream nodes in a matter of weeks. (And the next thing it did was Gemini added Ethereum trading.)
So a sizable percentage of nodes are already using Classic. Despite desperate, irrelevant attempts of certain posters on these forums to "spin" the current situation as a "win" for Core - it is actually a major "fail" for Core. Because if Core/Blockstream were not "blocking" Bitcoin's natural, organic growth with that crappy little line of temporary anti-spam kludge-code which you and your minions have refused to delete despite Satoshi explicitly telling you to back in 2010 ("MAX_BLOCKSIZE = 1000000"), then there would be something close to 0% nodes running Classic - not 25% (and many more addable at the drop of a hat).

This vote is ongoing. This "voting" is not like a normal vote in a national election, which is over in one day. Unfortunately for Core/Blockstream, the "voting" for Classic and against Core is actually a two-year-long referendum. It is still ongoing, and it can rapidly swing in favor of Classic at any time between now and Classic's install-by date (around January 1, 2018, I believe) - at any point when the market decides that it needs and wants bigger blocks (ie, due to a congestion crisis). You know this, Adam Back knows this, Austin Hill knows this, and some of your brainwashed supporters on censored forums probably know this too. This is probably the main reason why you're all so freaked out and feel the need to even respond to us unwashed bigger-block supporters, instead of simply ignoring us. This is probably the main reason why Adam Back feels the need to keep flying around the world, holding meetings with miners, making PowerPoint presentations in English and Chinese, and possibly also making secret deals behind the scenes. This is also why Theymos feels the need to censor. And this is perhaps also why your brainwashed supporters from censored forums feel the need to constantly make their juvenile, content-free, drive-by comments (and perhaps also why you evidently feel the need to privately message me your own comments now).
Because, once again, for the umpteenth time in years, you've seen that we are not going away. Every day you get another worrisome, painful reminder from us that Classic is still running on 25% of "your" network. And every day you get another worrisome, painful reminder that Classic could easily jump to 75% in a matter of days - as soon as investors see their $7 billion wealth starting to evaporate when the network goes into a congestion crisis due to your obstructionism and insistence on artificially small 1 MB blocks. If your code were good enough to stand on its own, then none of Core's globetrotting and campaigning and censorship would be necessary. But you know, and everyone else knows, that your cripple-code does not include simple and safe scaling - and the competing code (Classic, Unlimited) does. So your code cannot stand on its own - and that's why you and your supporters feel that it's necessary to keep up the censorship and the lies and the snark. It's shameful that a smart coder like you would be involved with such tactics.

Oppressive regimes always last longer than everyone expects - but they also collapse faster than anyone expects. We already have interesting historical precedents showing how grassroots resistance to centralized oppression and obstructionism tends to work out in the end. The phenomenon is two-fold:
The oppression usually drags on much longer than anyone expects; and
The liberation usually happens quite abruptly - much faster than anyone expects.
The Berlin Wall stayed up much longer than everyone expected - but it also came tumbling down much faster than everyone expected. Examples of oppressive regimes that held on surprisingly long, and collapsed surprisingly fast, are rather common - eg, the collapse of the Berlin Wall, or the collapse of the Soviet Union. (Both examples are actually quite germane to the case of Blockstream/Core/Theymos - as those despotic regimes were also held together by the fragile chewing gum and paper clips of denialism and censorship, and the brainwashed but ultimately complacent and fragile yes-men that inevitably arise in such an environment.) The Berlin Wall did indeed seem like it would never come down. But the grassroots resistance against it was always there, in the wings, chipping away at the oppression, trying to break free. And then when it did come down, it happened in a matter of days - much faster than anyone had expected. That's generally how these things tend to go:
oppression and obstructionism drag on forever, and the people oppressing freedom and progress erroneously believe that Core/Blockstream is "winning" (in this case: Blockstream/Core and you and Adam and Austin - and the clueless yes-men on censored forums like r\bitcoin who mindlessly support you, and the obedient Chinese miners who, thus far, have apparently been too polite to oppose you);
then one fine day, the market (or society) mysteriously and abruptly decides one day that "enough is enough" - and the tsunami comes in and washes the oppressors away in the blink of an eye.
So all these non-entities with their drive-by comments on these threads and their premature gloating and triumphalism are irrelevant in the long term. The only thing that really matters is investors and users - who are continually applying grassroots pressure on the network, demanding increased capacity to keep the transactions flowing (and the price rising). And then one day: the Berlin Wall comes tumbling down - or, in the case of Bitcoin: a bunch of mining pools have to switch to Classic, and they will switch so fast it will make your head spin. Because there will be an emergency congestion crisis where the network is causing the price to crash and threatening to destroy $7 billion in investor wealth. So it is understandable that your supporters might sometimes prematurely gloat, or you might feel the need to try to comment publicly or privately, or Adam might feel the need to jet around the world. Because a large chunk of people have rejected your code. And because many more can and will - and they'll do so in the blink of an eye. Classic is still out there, "waiting in the wings", ready to be installed, whenever the investors tell the miners that it is needed. Fortunately for big-block supporters, in this "election", the polls don't stay open for just one day, like in national elections. The voting for Classic is ongoing - it runs for two years. It is happening now, and it will continue to happen until around January 1, 2018 (which is when Classic-as-an-option has been set to officially "expire"). To make a weird comparison with American presidential politics: It's kinda like if either Hillary or Trump were already in office - but meanwhile there was also an ongoing election (where people could change their votes as often as they want), and the day people got fed up with the incompetent incumbent, they could throw them out (and install someone like Bernie instead) in the blink of an eye.
So while the inertia does favor the incumbent (because people are lazy: it takes them a while to become informed, or fed up, or panicked), this kind of long-running, basically never-ending election favors the insurgent (because once the incumbent visibly screws up, the insurgent gets adopted - permanently).

Everyone knows that Satoshi explicitly defined Bitcoin to be a voting system, in and of itself. Not only does the network vote on which valid block to append next to the chain - the network also votes on the very definition of what a "valid block" is. Go ahead and re-read the anonymous PDF that was recently posted on the subject of how you are dangerously centralizing Bitcoin by trying to prevent any votes from taking place: https://np.reddit.com/btc/comments/4hxlqu/uhoh_a_warning_regarding_the_onset_of_centralised/ The insurgent (Classic, Unlimited) is right (they maximally use available bandwidth) - while the incumbent (Core) is wrong (it needlessly throws bandwidth out the window, choking the network, suppressing volume, and hurting the price). And you, and Adam, and Austin Hill - and your funders from the Bilderberg Group - must be freaking out that there is no way you can get rid of Classic (due to the open-source nature of cryptocurrency and Bitcoin).

Cripple-code will always be rejected by the network. Classic is already running on about 20%-25% of nodes, and there is nothing you can do to stop it - except commenting on these threads, or having guys like Adam flying around the world doing PowerPoints, etc. Everything you do is irrelevant when compared against billions of dollars in current wealth (and possibly trillions more down the road) which needs and wants and will get bigger blocks. You guys no longer even make technical arguments against bigger blocks - because there are none: Classic's codebase is 99% the same as Core, except with bigger blocks.
So when we do finally get bigger blocks, we will get them very, very fast: because it only takes a few hours to upgrade the software to keep all the good crypto and networking code that Core/Blockstream wrote - while tossing that single line of 1 MB "max blocksize" cripple-code from Core/Blockstream into the dustbin of history - just like people did with the Berlin Wall.
Great Firewall of China Breached by Bitcoin Blockchain: The iconic image of the bravely defiant Tiananmen Square Tank Man who symbolizes the power of the individual against the might of the State is latest prohibited material permanently embedded on the Bitcoin Blockchain.
Bitcoin Classic hard fork causes chaos on /r/Bitcoin! Luke-Jr complains about "blatant lies from a new altcoin calling itself Bitcoin Classic", reveals his ignorance on 2 basic aspects of Bitcoin governance! Theymos deletes top post by E Vorhees, mod StarMaged undeletes it, Theymos fires StarMaged!
How clueless can Luke-Jr be? He can't seem to grasp the fact that the Bitcoin Classic devs disagree with the Core devs - which is why they're forking a new, independent repo, away from Core. To give users a choice among Bitcoin clients. Devs who want to work on Bitcoin Classic obviously don't need permission from Core. They're totally separate repos. "Decentralized development" and all. But poor Luke-Jr, living in his bubble, with his centralized, top-down, authoritarian worldview, just can't seem to wrap his head around these simple and obvious facts:
Bitcoin Classic doesn't need to submit a BIP to the Core devs.
Bitcoin Classic doesn't need to get the consensus of the Core devs.
As a new Bitcoin dev team, Bitcoin Classic can have its own series of BIPs ("BCLIPs"?). And Bitcoin Classic can get consensus among its own devs - and also, among its users - an area where Core / Blockstream devs have been doing a horrible job, because:
Core / Blockstream devs have been ignoring features which users need (scaling); and
Core / Blockstream devs have been forcing features onto users which they don't want (RBF).
By the way, Peter Todd evidently knows way more about Bitcoin governance than Luke-Jr

Peter Todd actually understands these basic concepts about Bitcoin governance. Maybe he could give Luke-Jr some remedial coaching to get him up to speed on this complicated stuff?
Peter Todd: If consensus among devs can't be reached, it's certainly more productive if the devs who disagree present themselves as a separate team with different goals; trying to reach consensus within the same team is silly given that the goals of the people involved are so different.
https://np.reddit.com/btc/comments/3xhsel/peter_todd_if_consensus_among_devs_cant_be/

Bitcoin Classic gets off to a strong start; /Bitcoin descends into chaos

The new repo Bitcoin Classic has gotten off to a strong start, because it gives miners what they want. Meanwhile, /Bitcoin is starting to descend into chaos over the whole thing. The problem for /Bitcoin is that a repo has finally come along which actually provides some simple, popular and robust short-term and long-term scaling solutions that most stakeholders are in agreement about. Bitcoin Classic didn't stumble upon this by accident. Their team already includes two key members:
jtoomim, a miner and coder who's been testing software and talking to users on both sides of the Great Firewall of China for several months now, so he can be sure he's giving them what they actually want.
gavinandresen, a highly respected coder who Satoshi originally handed control of the first Bitcoin repo over to (before Blockstream hijacked it). Gavin is well-known for his firm belief that users (not devs) should have control. He has already confirmed that he's going to work on Bitcoin Classic. And he's also stated that his "new favorite max-blocksize scaling proposal" is BitPay's Adaptive Block Size Limit (instead of BIP 101).
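The idea behind an adaptive limit like BitPay's is that the cap is recomputed from the sizes of recent actual blocks, so it grows (or shrinks) with real usage instead of being fixed by devs. A minimal sketch of a median-based limit; the 2x multiplier, the 2 MB floor, the 144-block window, and the function name are all my own illustrative assumptions, not BitPay's published spec:

```python
import statistics

def adaptive_max_blocksize(recent_sizes, multiplier=2.0, floor=2_000_000):
    """Illustrative median-based limit: allow up to `multiplier` times
    the median of recent actual block sizes (in bytes), never dropping
    below a 2 MB floor. Constants are assumptions, not BitPay's spec."""
    return max(floor, int(multiplier * statistics.median(recent_sizes)))

# Light usage: median 900 KB blocks -> the floor keeps the cap at 2 MB.
print(adaptive_max_blocksize([900_000] * 144))    # 2000000
# Heavy usage: median 3 MB blocks -> the cap rises to 6 MB with demand.
print(adaptive_max_blocksize([3_000_000] * 144))  # 6000000
```

Using the median (rather than the mean) makes the limit robust against a few deliberately stuffed or empty blocks from any single miner.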
BitPay's Adaptive Block Size Limit

BitPay's Adaptive Block Size Limit seems to be the first blocksize proposal with good chances of achieving consensus among users, because it offers the following advantages: (1) It's simple and easy to understand; (2) It starts off with a tiny bump to 2 MB, which miners are already in consensus about; (3) "It makes it clear that miners are in control, not devs"; (4) It has a robust, responsive roadmap for scaling long-term, with "max blocksize" based on the median of previous actual block sizes (or possibly some other algorithm which the community might decide upon).

The key feature of Bitcoin Classic is that it puts users in control - not devs

So Bitcoin Classic has gotten off to a great start right out of the gate, due to the involvement of JToomim and Gavin, who have been writing code and running tests and - perhaps most importantly - listening to users, to make sure this repo gives them what they want. A lot of what Bitcoin Classic is about isn't so much this or that specific spec. First and foremost, it's about "making it clear that miners are in control, not devs". As you might imagine, this kind of democratic approach is driving /Bitcoin crazy.

/Bitcoin doesn't know what to do about Bitcoin Classic

After living in their faraway bubble of censorship for the past year, ruled by a tyrant and surrounded by yes-men and trolls, twisting themselves into contortions trying to redefine "altcoins" and "forks" and "consensus", the guys over at /Bitcoin now find themselves totally unable to figure out what to do, now that the Bitcoin user community is finally getting excited about a new repo offering simple and popular scaling solutions. The guys over at /Bitcoin simply have no idea how to handle this, now that "consensus" looks like it might be starting to form around a repo which they don't control. Well, what did they expect? How could consensus ever form on their forum when they don't allow anyone to debate anything over there?
Did they think it was just going to magically drop out of the sky engraved on stone tablets or something? Anyways, here's a summary of some of the chaos happening over at /Bitcoin this past week - first due to Coinbase daring to test the Bitcoin XT repo, and second due to the Bitcoin Classic repo getting announced:

/Bitcoin goes into meltdown over CoinBase testing XT
CoinBase states in their blog that they were testing the Bitcoin XT repo (which competes with Core), so that they would be able to continue serving their customers without interruption in case of a fork;
Theymos throws a fit and removes Coinbase from bitcoin.org;
A thread on Core's GitHub repo goes up and gets 95% ACKs saying that CoinBase should be un-removed;
Theymos forces Charlie Lee to go through one of those Communist-style "rehabilitations" where he has to sign one of those public "confessions" you used to see political prisoners in dictatorships forced into;
Theymos un-removes Coinbase from bitcoin.org - spewing his usual nonsense and getting massively downvoted as usual;
Finally, a pull-request goes up on Core's GitHub repo where they say they're officially distancing themselves from bitcoin.org (and will probably get their own site).
So over the course of a couple of days, Theymos has managed to alienate one of the largest licensed Bitcoin financial institutions in the USA, and seems to have caused some kind of split to start forming between Core and /Bitcoin.

/Bitcoin goes into meltdown over Bitcoin Classic forking away from Core
SatoshisCat makes a post in /Bitcoin about Bitcoin Classic, it gets hundreds of upvotes, goes to 1st or 2nd place [Note: Title of this OP incorrectly says that "E Vorhees" made that post; the title of this OP should have said that SatoshisCat made that post. Sorry - too late to change the title of this OP now.];
Theymos removes the post because it's "spam" or an "altcoin" or something;
E Vorhees complains in another post, calling it "censorship";
Luke-Jr weighs in and says they don't "censor", they only "moderate" - and gets massively downvoted;
One of the other mods (StarMaged) at /Bitcoin un-removes the post by E Vorhees that had been previously removed;
Theymos removes StarMaged's moderator privileges;
Theymos decides to leave the post back up - and digs himself deeper into a hole spewing his usual nonsense and getting massive downvotes and criticisms.
At this point, I'm just laughing out loud. How do Luke-Jr and his censor-buddy Theymos always manage to get everything so totally wrong?? We know part of the answer:
They're well-meaning, but very young and inexperienced;
They're smart about some things - but this gives them big egos and a big blind spot, so they're unaware that they're not so smart about everything;
They no longer know what people are thinking and talking about in the real world, because they've isolated themselves in a bubble of censorship and yes-men for the past years (plus lots of trolls who love to frolic at /Bitcoin, knowing they're safe there);
They don't know one of the eternal facts about human psychology and politics: "Power corrupts, and absolute power corrupts absolutely." Did they really think they were going to be an exception?
Evidently they didn't get the memo that most people who are into Bitcoin aren't into bowing down to central authorities.
Maybe someday these kids will grow up and learn about things like politics and economics and history - or things like Nassim Taleb's concept of anti-fragility. For the moment, they apparently have no clue that their tyranny has left them fragile and vulnerable, now that they've silenced anyone around them who might open their eyes and challenge their ideas.

More about Bitcoin Classic

If you want to read more about Bitcoin Classic, here are some posts that might be interesting: https://bitcoinclassic.com/
We are hard forking bitcoin to a 2 MB blocksize limit. Please join us. The data shows consensus amongst miners for an immediate 2 MB increase, and demand amongst users for 8 MB or more. We are writing the software that miners and users say they want. We will make sure that it solves their needs, help them deploy it, and gracefully upgrade the bitcoin network’s capacity together. We call our code repository Bitcoin Classic. It is a one-feature patch to bitcoin-core that increases the blocksize limit to 2 MB. In the future we will continue to release updates that are in line with Satoshi’s whitepaper & vision, and are agreed upon by the community.
I'm working on a project called Bitcoin Classic to bring democracy and Satoshi's original vision back to Bitcoin development.
BitPay's Adaptive Block Size Limit is my favorite proposal. It's easy to explain, makes it easy for the miners to see that they have ultimate control over the size (as they always have), and takes control away from the developers. – Gavin Andresen
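As a rough illustration of the median-based adaptive limit Gavin describes - note that the multiplier, window, and 2 MB floor below are assumptions for the sake of the sketch, not BitPay's actual spec:

```python
from statistics import median

def adaptive_block_size_limit(recent_sizes, multiplier=2.0, floor=2_000_000):
    """Illustrative sketch of a median-based adaptive cap: the max
    blocksize tracks what miners actually produce instead of a
    hard-coded constant. Parameters here are assumptions, not
    BitPay's exact proposal."""
    if not recent_sizes:
        return floor
    return max(floor, int(multiplier * median(recent_sizes)))

# If recent blocks hovered around 1.5 MB, the cap grows with demand:
sizes = [1_400_000, 1_500_000, 1_600_000]
print(adaptive_block_size_limit(sizes))  # 3000000 (2 x 1.5 MB median)
```

The point is that the cap follows actual usage over time, so nobody has to hard-code a new number in a later release.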
"Eppur, se muove." | It's not even about the specifics of the specs. It's about the fact that (for the first time since Blockstream hijacked the "One True Repo"), we can now actually once again specify those specs. It's about Bitcoin Classic.
New Bitcoin Unlimited website was launched today. Come take a look!
29 April 2017: BU Launches New Website
Bitcoin Unlimited is very excited to be launching a new website today. The site specifically targets bitcoin users, node operators, miners and investors, explaining how BU benefits each group. It also highlights the novel new technologies that the BU team has contributed to help the network backbone scale to meet the demands of the global economy, showcasing BU's adjustable block cap feature, xtreme thin blocks, and parallel validation. Secretary and Chief Scientist Peter Rizun explains that "when the block size limit debate broke out, there were three main obstacles: (1) node operators couldn't increase their node's block size limit without recompiling from source code, (2) block propagation--especially across the Great Firewall of China--was the dominant scaling bottleneck, and (3) certain pathological transactions, if included in a block, would cause nodes to freeze for over a minute. These obstacles have been overcome and I think this new website will help to make this clear." In addition to highlighting BU's novel technologies, the new site includes a more comprehensive resource section featuring scaling-related articles written by BU members, along with a list of all Bitcoin Unlimited Improvement Proposals (BUIPs) to date. Lastly the site includes a comprehensive set of FAQs. "My impression was and is that Bitcoin Unlimited is terribly misunderstood," noted one of the anonymous FAQ contributors. "The lack of knowledge and a public place for easy consumable information has been the source of misinformation and made it extremely difficult to fight misinformation. With the FAQ I want to help raise awareness of what BU is really about." The new website wouldn't have been possible without the help and professionalism of website developer Sina Habibian. "It’s been a pleasure working with the Bitcoin Unlimited team in building their new website," Sina remarked. 
"There are many technical and non-technical resources included on the site that shed light on how BU operates. My hope is that this will inform our public dialogue and focus our attention on the technical and economic merits of BU." Sina ended with a reminder that BU is a community project and a call for more people to get involved: "The open-source code for the site is available on Github. Please contribute if you see room for improvement." Link to website: https://www.bitcoinunlimited.info
Core/Blockstream is living in a fantasy world. In the real world everyone knows (1) our hardware can support 4-8 MB (even with the Great Firewall), and (2) hard forks are cleaner than soft forks. Core/Blockstream refuses to offer either of these things. Other implementations (eg: BU) can offer both.
It's not even mainly about the blocksize. There's actually several things that need to be upgraded in Bitcoin right now - malleability, quadratic verification time - in addition to the blocksize, which could be 4-8 megs right now, as everyone has been saying for years. The network is suffering congestion, delays and unpredictable delivery this week - because of 1 MB blocks - which is all Core/Blockstream's fault. Chinese miner Jiang Zhuo'er published a post today where once again we hear that people's hardware and infrastructure would already support 4-8 MB blocks (including the Great Firewall of China) - if only our software could "somehow" be upgraded to support 4-8 MB blocks. https://np.reddit.com/btc/comments/5eh2cc/why_against_segwit_and_core_jiang_zhuoer_who/ https://np.reddit.com/Bitcoin/comments/5egroc/why_against_segwit_and_core_jiang_zhuoer_who/ Bigger blocks would avoid the congestion we're seeing this week - and would probably also cause a much higher price. The main reason we don't have 4-8 MB blocks right now is Core/Blockstream's fault. (And also, as people are now realizing: it's everyone's fault, for continuing to listen to Core/Blockstream, after all their failures.) Much more complex changes have been rolled out in other coins, with no problems whatsoever. Code on other projects gets upgraded all the time, and Satoshi expected Bitcoin's code to get upgraded too. But Core/Blockstream don't want to upgrade.

Coins can upgrade as long as they maintain their "meta-rules"

Everyone has a fairly clear intuition of what a coin's "meta-rules" are, and in the case of Bitcoin these include:
21 million coin cap
Note that "1 MB max blocksize" is not a meta-rule of Bitcoin. It was a temporary anti-spam measure, mentioned nowhere in the original descriptions, and it was supposed to be eliminated long ago. Blocksizes have always increased, and people intuitively understand that we should get the most we can out of our hardware and infrastructure - which would support 4-8 MB blocks now, if only some dev team would provide that code. Core/Blockstream, for their own mysterious reasons, refuse to provide that code. But that is their problem - not our problem. It's not rocket science, and we're not dependent on Core/Blockstream Much of the "rocket science" of Bitcoin was already done by Satoshi, and further incremental improvements have been added since. Increasing the blocksize is a relatively simple improvement, and it can be done by many, many other dev teams aside from Core/Blockstream - such as BU, which proposes a novel approach offering configuration settings allowing the market to collaboratively determine the blocksize, evolving over time. We should also recall that BitPay also proposed another solution, based on a robust statistic using the median of previous blocksizes. One important characteristic about both these proposals is that they make the blocksize configurable - ie, you don't need to do additional upgrades later. This is a serious disadvantage of SegWit - which is really rather primitive in its proposed blocksize approach - ie, it once-again proposes some "centrally planned", "hard-coded" numbers. After all the mess of the past few years of debate, "centrally planned hard-coded blocksize numbers" everyone now knows that are ridiculous. But this is what we get from the "experts" at Core/Blockstream. And meanwhile, once again, this week the network is suffering congestion, delays and unpredictable delivery - because Core/Blockstream are too paralyzed and myopic and arrogant to provide the kind of upgrade we've been asking for. 
Instead, they have wimped out and offered merely a "soft fork" with almost no immediate capacity increase at all - in other words, an insulting and messy hack. This is why Core/Blockstream's SegWit-as-a-spaghetti-code-soft-fork-with-almost-no-immediate-capacity-increase will probably get rejected by the community - because it's too little, too late, and in the wrong package.

Engineering isn't the only consideration

There are considerations involving economics and politics as well, which any Bitcoin dev team must take into account when deciding how to package and deploy the code improvements they offer to users - and on this level, Core/Blockstream has failed miserably. They have basically ignored the fact that many people are already dependent for their economic livelihood on the $12 billion market cap in the blockchain flowing smoothly. And they also ignored the fact that people don't like to be patronized / condescended to / dictated to. Core/Blockstream did not properly take these considerations into account - so if their current SegWit-as-a-spaghetti-code-soft-fork-with-almost-no-immediate-capacity-increase offering gets rejected, then it's all their fault.

Core/Blockstream hates hard forks

Core/Blockstream have an extreme aversion to what they pejoratively call "hard forks" (which Bitcoin Unlimited developer Thomas Zander u/ThomasZander correctly pointed out should be called by the neutral terminology "protocol upgrades"). Core/Blockstream seem to be worried - perhaps rightfully so - that any installation of new software on the network would necessarily constitute a "full node referendum" which might dislodge Core/Blockstream from their position as "incumbents". But, again, that's their problem, not ours. Bitcoin was always intended to be upgraded by "full node referendum" - regardless of whether that might unseat any currently "incumbent" dev team which had failed to offer the best code for the network.
https://np.reddit.com/btc/search?q=blockstream+hard+fork&restrict_sr=on

Insisting on "soft forks" and "small blocks" means that Core/Blockstream's code will always be inferior. Core/Blockstream's aversion to "hard forks" (aka "protocol upgrades") will always have horrible consequences for their code quality.

Blockstream is required (by law) to serve their investment team, whose lead investors include legacy "fantasy fiat" finance firms such as AXA

This means that Blockstream is not required (by law) to serve the Bitcoin community - they might, or they might not. And they might, or might not, even tell us what their actual goals are. Their corporate owners want soft forks (to avoid the possibility of another dev team coming to prominence), and they want small blocks (which they believe will support their proposed off-chain solutions such as LN - which may never even be released, and will probably be centralized if it is ever released). This simply conflicts with the needs of the Bitcoin community. Which is the main reason why Blockstream is probably doomed - they are legally required to serve their investors, not the Bitcoin community.

If we're installing new code, we might as well do a hard fork

There are around 5,000 - 6,000 nodes on the network. If Core/Blockstream expected 95% of them to upgrade to SegWit-as-a-soft-fork, then with such a high adoption level, they might as well have done it as a much cleaner hard fork anyway. But they didn't - because they don't prioritize our needs, they prioritize the needs of their investors. So instead of offering an upgrade with the features we wanted (including on-chain scaling), implemented the way we wanted (as a hard fork) - they offered us everything we didn't want: a messy spaghetti-code soft fork, which doesn't even include the features we've been clamoring about for years (and which the congested network actually needs right now, this week).
Core/Blockstream has betrayed the early promise of SegWit - losing many of its early supporters, including myself

Remember, the main purpose of SegWit was to be a code cleanup / refactoring. And you do not do a code cleanup / refactoring by introducing more spaghetti code just because devs are afraid of "full node referendums" where they might lose "power". Instead, devs should be honest, and actually serve the needs of the community, by giving us the features we want, packaged the way we want them. As noted in the link in the section title above, I myself was an outspoken supporter championing SegWit on the day when I first saw the YouTube video of Pieter Wuille explaining it at one of the early "Scaling Bitcoin" conferences. Then I found out that doing it as a soft fork would add unnecessary "spaghetti code" - and I became one of the most outspoken opponents of SegWit. By the way, it must have been especially humiliating for a talented programmer like Pieter Wuille to have to contort SegWit into the "spaghetti-code soft fork" proposed by a mediocre programmer like Luke-Jr. Another tragic Bitcoin farce brought to you by Blockstream - maybe someday we'll get to hear all the juicy, dreary details.

Dev teams that don't listen to their users... get fired

We told Core/Blockstream time and time again that we're not against SegWit or LN per se - we simply also want to:
make maximum use of our hardware and infrastructure, which would currently support 4 or 8 MB blocks - not the artificial scarcity imposed by Core/Blockstream's code with its measly 1 MB blocks.
keep the code clean - don't offer us "spaghetti code" just because you think you can trick us into never "voting" so you can reign as "incumbents" forever.
This was expressed again, most emphatically, at the Hong Kong meeting, where some Core/Blockstream-associated devs seemed to make some commitments to give users what we wanted. But later they dishonored those commitments anyways, and used fuzzy language to deny that they had ever even made them - further losing the confidence of the users. Any dev team has to earn the support of the users, and Core/Blockstream (despite all their financial backing, despite having recruited such a large number of devs, despite having inherited the original code base) is steadily losing that support - because they have not given people what we asked for, and they have not compromised one inch on very simple issues - and to top it off, they have been dishonest. They have also tried to dictate to the users - and users don't like this. Some users might not know coding - but others do. One example is ViaBTC - who is running a very big mining pool, with a very fast relay network, and also offering cloud mining - and emphatically rejecting the crippled code from Core/Blockstream. Instead of running Core/Blockstream's inferior crippled code, ViaBTC runs Bitcoin Unlimited.

This was all avoidable

Just think for a minute how easy it would have been for Core/Blockstream to package their offering more attractively - by including 4 MB blocks for example, and by doing SegWit as a hard fork. Totally doable - and it would have kept everyone happy - avoiding congestion on the network for several more years, while also paving the way for their dreams of LN - and also leaving Core/Blockstream "in power". But instead, Core/Blockstream stupidly and arrogantly refused to listen or cooperate or compromise with the users. And now the network is congested, and it is unclear whether users will adopt Core/Blockstream's too-little too-late offering of SegWit-as-a-spaghetti-code-soft-fork-with-almost-no-immediate-capacity-increase.
So the current problems are all Core/Blockstream's fault - but also everyone's fault, for continuing to listen to Core/Blockstream. The best solution now is to reject Core/Blockstream's inferior roadmap, and consider a roadmap from some other dev team (such as BU).
According to the WSJ, regulators have decided on a "comprehensive ban on channels for the buying or selling of the virtual currency in China"
Here is the link to the original WSJ article. Full Article: (Thanks knight222) BEIJING—Chinese authorities are moving toward a broad clampdown on bitcoin trading, testing the resilience of the virtual currency as well as the idea that its decentralized nature protects it from government interference. Regulators have decided on a comprehensive ban on channels for the buying or selling of the virtual currency in China that goes beyond plans to shut commercial bitcoin exchanges, according to people familiar with the matter. Officials communicated the message to several industry executives at a closed-door meeting in Beijing on Friday, according to people who were at the meeting. Until last week, many entrepreneurs in China’s bitcoin circles had thought authorities might shut down only commercial trading activity while tolerating peer-to-peer, or over-the-counter, bitcoin platforms, which enable buyers and sellers to find each other and trade directly. The Chinese plan represents some of the most draconian measures any government has taken to control bitcoin, created by an anonymous programmer nearly a decade ago as an alternative to official currencies, and word of it sent another wave of anxiety through the Chinese bitcoin community. China has digitized its financial sector faster than any other nation. Authorities continue to support the trend, though their public comments also suggest concern bitcoin could weaken official control of the country’s money supply. The crackdown on the bitcoin ecosystem represents Beijing’s possibly biggest effort so far to limit expansion of a system to rival the yuan. In a previous crackdown, in 2009, the central bank banned the use of tokens valued at billions of dollars created in China’s massive online-gaming networks for real-world purchases.

Bit of Uncertainty

China’s clampdown on bitcoin has hurt global prices and domestic trading volumes; for now the country remains a major center for bitcoin mining.
[Chart omitted: bitcoin price (roughly $1,000-$5,000 over 2016 through September 2017), mining share by country (China 64.7%, others 35.3%, as of Aug. 31), and daily trading share by currency (yuan, dollar, euro, yen). Sources: coindesk (price); bitcoinity.org (trading volume, mining)] A quasiregulatory body called the National Internet Finance Association of China (NIFA) warned investors about virtual currency trading in a statement last week and said that bitcoin platforms lack “legal basis” to operate in the country. A goal of China’s monetary regulation is to ensure that “the source and destination of every piece of money can be tracked,” Li Lihui, a NIFA official, told a technology conference in Shanghai on Friday. A lack of clarity from regulators has fueled worries about how far the government will go. One uncertainty, for example, is whether the ban will affect bitcoin deals made over social-messaging apps such as WeChat. People in the industry say a wave of bitcoin users in recent days migrated from WeChat to the encrypted messaging service Telegram. A broader clampdown will likely include blocking mainland access to websites of foreign bitcoin exchanges such as Coinbase in the U.S. and Bitfinex in Hong Kong, say people familiar with the matter. Last weekend, the largest domestic bitcoin exchanges—BTCC, Huobi and OKCoin—all said they would halt trading services in the coming weeks, sending prices of bitcoin on the global market tumbling. Bitcoin traded at $3,947 apiece on Monday evening in Beijing, roughly 26% off its high of $4,960.72 on Sept. 1. Industry advocates hail bitcoin for allowing users to transact with each other without the involvement of a central authority. In reality, users access the market for virtual currencies via services and businesses that are centralized in real locations and therefore are susceptible to third parties. Any attempt by China to interfere broadly in the bitcoin network would test that notion further.
On the flip side, if bitcoin does prove resilient, China could be shutting itself out of a growing global market. As recently as last year, China accounted for the bulk of global bitcoin trading activity, but its share has dropped dramatically since the government started attempting to cool the market. China now accounts for less than 15% of bitcoin trading volume. Blocking overseas exchange sites would add them to a long list of websites Beijing considers too sensitive, including Google and Facebook. Chinese authorities haven’t made public their stance on virtual currency trading. The People’s Bank of China and the Ministry of Internet and Information Technology didn’t respond to requests for comment on bitcoin measures. A document passed around at Friday’s meeting and reviewed by The Wall Street Journal instructs Beijing-based exchanges to unwind their operations and provide information on bank accounts used for clients’ deposits by Wednesday. While China’s sway in bitcoin trading volumes has faded, the country remains a major creator of new bitcoin through a process called mining. Chinese bitcoin miners operate a vast collection of computers for the purpose in remote areas like northwestern Xinjiang, where they can access electricity for cheap. Until now, Chinese miners considered themselves immune from Beijing’s evolving stance on bitcoin trading. One entrepreneur said miners are now worried about authorities moving to limit their operations. “Using VPNs as a workaround will be difficult,” he said, referring to virtual private networks that allow users to circumvent China’s so-called Great Firewall. Chinese miners loom large in the global bitcoin mining network, also serving an important role in the upkeep of the bitcoin ledger. Potential interference in how they connect to and use the internet could disrupt, at least temporarily, both the creation of new bitcoin and the speed at which global bitcoin transactions are confirmed, say people in the industry. 
The stepped-up tightening by regulators comes as China’s top leaders have been vocal about battling money laundering, in advance of an important leadership transition this fall. Last week, China’s State Council released guidelines aimed at better coordination between regulators to address the transfer of capital for illicit purposes. —James T. Areddy in Shanghai and Liyan Qi in Beijing contributed to this article.
The /r/btc China Dispatch: Episode 3 - Block Size, Chinese Miners and The Great Firewall
Good Sunday morning, /btc! The question of why Chinese miners don’t use a node outside of China to route around the Great Firewall of China (hereafter abbreviated as “the GFW”) and relay blocks more efficiently, a question with profound implications for any future block size proposal, has come up more than once over the last couple of days here, so for this episode I personally submitted the above question to one of China’s largest and most active bitcoin forums, 8btc.com, and got some interesting responses that might surprise you. For those of you who missed the last two episodes, you can catch up here and here. Also by popular request I will see if I can submit translations of the most upvoted comments here back to 8btc.com so we can establish an ongoing dialogue between both sides of the GFW.

[OP] Posted by KoKansei
Subject: If Chinese miners are concerned that the GFW will affect their ability to process big blocks, why don’t they set up a node outside of China?

My question concerns the subject above. No doubt Chinese bitcoiners are well-aware that an irreconcilable schism has occurred in the Bitcoin development sphere and this split has shaken many users’ confidence in the currency. As a result a majority of miners, including those in China, have expressed support for the Bitcoin Classic client, which will increase the upper block size limit. However, although many miners within China support Classic, they have also expressed concerns about further increases in the block size going forward, since the GFW may limit the bandwidth of their connection with nodes outside of China, thereby resulting in losses to their mining business. As a mod of /btc (one of the largest uncensored forums outside of China) I would like to pose a question to the esteemed regulars of this board: if Chinese miners are concerned that the GFW will affect their ability to process large blocks, why don’t they set up nodes outside of China?
If this thread gets a fair number of responses I will repost your thoughts to /btc to promote an exchange of ideas between our two bitcoin communities. Thank you!

[Reply 1] Posted by LaibitePool (LTC1BTC.com)

I would like to respond briefly as the manager of a mining pool.
A new block can only be broadcast outward by a single node and two blocks which are produced simultaneously by two different nodes cannot be broadcast at the same time.
For every second that a broadcasted block is delayed, there is a 1/600 chance that the network will produce a new block, so the risk of the block being orphaned increases by 1/600.
Currently the majority of hashing power is concentrated in China and the state of China’s Internet within China is quite good so the nodes from which China’s pools initially broadcast are located in China.
An initial broadcast to foreign nodes must get over the GFW. Currently all large mining pools have already established nodes outside of China, but they’re only there to speed up the whole process and do not allow circumventing of the GFW.
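The orphan-risk arithmetic in Reply 1 can be checked with a quick sketch: with a 600-second average block interval, block discovery is roughly a Poisson process, so each second of propagation delay adds about a 1/600 chance that a competing block appears (a simplifying model that ignores hashrate distribution and relay topology):

```python
import math

AVG_BLOCK_INTERVAL = 600  # seconds between blocks, on average

def orphan_risk(delay_seconds):
    """Probability that at least one competing block is found during
    a propagation delay, modeling block discovery as a Poisson
    process with rate 1/600 per second."""
    return 1 - math.exp(-delay_seconds / AVG_BLOCK_INTERVAL)

for delay in (1, 10, 60):
    print(f"{delay:>3}s delay -> {orphan_risk(delay):.4f} orphan risk")
# Prints:
#   1s delay -> 0.0017 orphan risk
#  10s delay -> 0.0165 orphan risk
#  60s delay -> 0.0952 orphan risk
```

For small delays this matches the 1/600-per-second rule of thumb almost exactly, which is why every second shaved off block propagation matters to a pool's bottom line.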
Supplementary Edit: It is not at all uncommon for Internet traffic going across national borders to be relatively slow, so the issue can’t be entirely blamed on just the GFW. Speeds are largely affected by a country / region’s total international bandwidth limits as well as related network topology. For example, we tested transmission from Shenzhen to Hong Kong and found that when you use suitable data centers the ping back and forth is less than 10 ms, but when you try and transmit a block from Hong Kong to the US or Europe (note that the GFW is not an obstacle here!) transmission is much slower than within China. I’m not sure who first proposed the notion that “block transmission is affected by the GFW,” but I don’t think this notion is really accurate. Putting it like that gives people the impression that bitcoin has already been subjugated by some kind of evil organization, producing negative effects as well as conflict and division within the community. It is more accurate to say: the transmission of blocks is limited by China’s outgoing international bandwidth availability, which has always been poor. This is mostly because China’s domestic Internet is already sufficiently vast and the needs of the vast majority of users can be satisfied domestically. This is different from the US and Europe, where almost all services involve transmission across national borders. If you’re interested in more details regarding China’s outgoing international bandwidth, you can take a look at a few reports, like “How Embarrassing! China’s Per Capita International Trunk Line Bandwidth is Only Half of Africa’s!”

[Reply 2] Posted by KoKansei

Thanks a lot for taking the time to post such a detailed response. If I may, I’d like to ask two more questions: (1) Given the current situation with the GFW, what do you think is the highest block size that Chinese miners are capable of dealing with? Is it possible to estimate such a number?
(2) My understanding is that the most important part of a new block is the header. Were a Chinese miner to establish a node outside of China, then it should be possible for them to send just the header of any new blocks across the GFW to said node, where the block can be broadcast. Using this method should solve the issue of having to transmit a whole block across the GFW. Are there currently any miners who are using or considering using this method? Thanks again for all your insight!

[Reply 3] Posted by Ma_Ya

I would like to respond briefly as a dedicated bitcoiner.
I don’t think that the developer sphere has necessarily undergone a schism, it’s just that now there exists a new competing version. Even in the event of a schism it is still possible to restore consensus. The only people who have had their confidence shaken are a minority of bitcoin speculators and traders; the confidence of the majority of bitcoin fans / faithful will not be shaken simply because of a split among the developers. A split is nothing - even if all of the developers were to disappear bitcoin could continue to function. The core framework of bitcoin was already completed during Satoshi’s time and all that’s left now is a bit of tweaking and adjustment.
Even if the GFW were to limit bandwidth, the miners’ business would not suffer - on the contrary it is the Western mining pools who would suffer losses. You have to realize that more than 50% of bitcoin’s hashing power is located in China, which is to say that the majority of new blocks are created in China. Once such a new block is created, it is first received by the nodes of other pools within China, after which it slowly makes its way over the GFW to the nodes of Western pools. That is to say the foreign pools are slower to receive blocks due to the GFW, which is actually beneficial to Chinese pools. Furthermore, it’s really not a big deal to transmit one or two MB worth of data in a 10 minute interval.
It is feasible to set up a node outside of China, but would you be able to take all of your miners with you outside China? Furthermore, miners need to be associated with a mining pool and there are not that many mining pools outside of China, so you’d just end up having to connect to a Chinese pool anyway. You’d still need to send data to and from China, so getting over the GFW is still an issue. Actually this is all just theoretical; in reality bitcoin has not been blocked by the GFW and due to bitcoin’s decentralized nature it would be difficult to block bitcoin. You worry too much, OP.
[Reply 4] Posted by LaibitePool (LTC1BTC.com)
I just added some content to my other reply. Given China’s current international bandwidth limitations, I think that 4MB is a reasonable value. Actually I support first going to 2MB since 2MB is enough for now. Future lifting of the cap can be done when it’s necessary.
China’s pools, under the guidance of F2Pool, have already employed the method you’re talking about. I suggest that Western pools also participate (as far as I know, there exist similar alliances in the West). Ideally Bitcoin Core should be upgraded to directly incorporate this functionality so that all pools can act as an interconnected subnetwork, solving the orphan problem. Once a block is released, each pool broadcasts to all standard nodes, thereby increasing the speed with which blocks propagate throughout the network.
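The two-stage relay this reply describes can be illustrated with a toy model (hypothetical names, not Bitcoin Core code): a new block is first shared across the pool-to-pool subnetwork, and then each pool broadcasts it onward to its own standard nodes.

```python
# Toy illustration of the two-stage relay described above: stage 1 spreads
# a new block among the interconnected pools; stage 2 has each pool fan it
# out to the ordinary nodes attached to it.
from typing import Dict, List, Set

def propagate(pools: Dict[str, List[str]]) -> Set[str]:
    """Return every participant that ends up holding the new block."""
    reached = set(pools)          # stage 1: all pools relay among themselves
    for nodes in pools.values():  # stage 2: each pool broadcasts to its nodes
        reached.update(nodes)
    return reached

pools = {"F2Pool": ["node1", "node2"], "WestPool": ["node3"]}
print(sorted(propagate(pools)))  # ['F2Pool', 'WestPool', 'node1', 'node2', 'node3']
```

The design point is that the expensive hop (pool to pool, possibly across the GFW) happens once per block over a small, well-connected subnetwork, after which fan-out to standard nodes is local and cheap.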
[Reply 5] Posted by Ma_Ya
I too would like to respond briefly to both of your questions.
(1) In theory they should be able to easily deal with blocks as large as 100MB. Blocks of this size could be transmitted in minutes even on a standard home connection, and this time is significantly reduced for miners who maintain specialized high-speed connections. Ultimately there is no firewall blocking transmission between pools within China, and in any case the sum of China's hashing power is already over 51%.
(2) If you understood the principles of mining and what I said before, you wouldn't ask this question. First of all, China's mining pools are in no rush to broadcast the nonce of a successfully mined block to nodes across the globe. It only needs to be received by several of the larger pools in China, because once it reaches those pools you've already covered more than half [of available hashing power], which is the same as achieving global consensus. Looked at this way, Chinese miners should actually want the GFW to interfere with and hinder Western pools. You also mentioned setting up a node outside of China and reconstituting [blocks] there, but in reality you wouldn't save much time that way. Think about it: what is the difference between transmitting 1-2 MB versus a few KB? Probably no more than about one second. 10 minutes versus 1 second is a ratio of 600:1, which is trivial when you take into account the randomness of mining itself. Furthermore, your proposal is only advantageous for Western pools and provides no benefit to Chinese pools.

[Reply 6] Posted by hzq0760
It's not at all surprising that there is some controversy on this subject. The fact that one country has more than 50% of the hashing power and also [translator's note: the sentence cuts off abruptly here with four dashes. Possible auto-censor?]. It's definitely a problem.
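Reply 5's 600:1 comparison can be made concrete under the standard Poisson model of block discovery (an assumption on my part, not something stated in the thread): if a block reaches competing miners t seconds late, the probability that a rival block is found in that window is roughly 1 - exp(-t/T), with T = 600 s the average block interval.

```python
import math

# Orphan-risk sketch: probability that a competing block is found during
# the t seconds it takes your block to reach the rest of the network,
# assuming Poisson block arrivals with a 600 s mean interval.
def orphan_risk(delay_s: float, interval_s: float = 600.0) -> float:
    return 1.0 - math.exp(-delay_s / interval_s)

print(round(orphan_risk(1), 5))   # ~0.00167: one extra second barely matters
print(round(orphan_risk(60), 4))  # ~0.0952: a full minute of delay is costly
```

This supports the reply's point: shaving about one second off propagation by reconstituting blocks outside China changes the orphan risk by well under a percent, which is lost in the noise of mining's own randomness.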
China's mining pools should do something to resolve this issue.

[Translator's note: some posts in this thread were omitted from the translation due to time constraints.]
One very interesting aspect of this is the Great Firewall of China, which observes and filters all Chinese internet traffic. Even without any sort of attack from China, the latency introduced by the Great Firewall impacts the entire Bitcoin network, since roughly 74% of all Bitcoin mining hash power is located in China. During his presentation at Scaling Bitcoin Montreal, Peter Todd explained how poor block propagation becomes more problematic when the Great Firewall of China is factored into the equation: due to the way the Great Firewall works, miners in China often find out about new blocks before miners in other countries (especially those across the world in the United States).