Getting Started — Bitcoin

GAIAcoin

GAIAcoin is an alternative cryptocurrency based on Bitcoin. It offers the security and reliability of the blockchain with an extensible, skinnable, modular platform design. Wallet users can buy and sell items inside the built-in store instantly with GAIA currency. Read more: http://www.gaiaplatform.com
[link]

Zero

Your Transactions Are Your Business.
[link]

Popular streamer Ice Poseidon creates a flash game and lead developer hides bitcoin mining script

Popular streamer Ice Poseidon creates a flash game and lead developer hides bitcoin mining script submitted by rustlecrowe to Bitcoin [link] [comments]

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this into written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 0:02:19.68,0:02:45.10
Alright, so thank you Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain, who are also the lead developers of the Satoshi’s Vision client. So Daniel and Steve, do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering, and specifically for Bitcoin SV I am the technical director of the project, which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners - that's the coordination side of the project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98
Great, so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old, but over the past year or so, what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi’s Vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes, well, we’ve been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreement around it, but what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the backend of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in their backend, and if they interoperate on a non-consensus level, great. If they don't, it's not such a big problem - there will obviously be bridges between the two. So, yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure, yeah, so our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing - particularly not so much at the unit test level but at the system test level - so setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren’t any other side effects. Because, you know, it was quite a rush to release the first version, we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we’re not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the Testnet Three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably heard us refer to before as the Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the performance test network, and I think we're planning on having two of those: one that we can just do our own stuff with and experiment on without having to worry about external unknown factors - other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests - but the other one (which I think might still be a work in progress, so Daniel might be able to answer that one) is one where basically everyone will be able to join and they can try and mess stuff up as badly as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question. I saw that you posted on Twitter about the revived Gigablock testnet initiative, and it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were going down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain’s help and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there are, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team’s been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been, in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is the performance related work which, as Daniel mentioned, the results of our performance testing fed into - what tasks we were gonna start working on for the performance related stuff. Now that work is still in progress - for some of the items that we identified the code is done and that's going through the QA process, but it’s not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it’s been QA first, performance second. The performance enhancements are close and on the horizon, but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level of the software. There are kind of two groups of them, mainly. Ones that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there are other ones that interface it with the outside world. For one of those in particular we're working closely with another group to make a compatible change - it's not consensus-changing or anything like that - but having the same interface on multiple different implementations will be very helpful, right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do, that some of the other developer groups weren't willing to do right now, is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about that - a lot of the objection to either removing the block size entirely or increasing it on a larger scale is this idea of the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the “infinite block attack”, and is it something that really exists, is it something that miners themselves should be more proactive about preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something there are probably two schools of thought about. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is already increased when the software can handle it, and you don’t run into the limit when the software improves. Obviously we’re from the latter school of thought. As I said before, we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer to peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB, then obviously you know something's wrong so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128, you know it’s kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack - and let’s not call it the infinite block attack, let's just call it the large block attack - is that it takes a lot of time to validate, but we've gotten around that by having parallel pipelines for blocks to come in. So you've got a block that's coming in and a node could be stuck on it for two hours or whatever, downloading and validating it. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then, you know, the problem kind of goes away.
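To make those mitigations concrete, here is a minimal sketch (illustrative only - the constant and the function names are invented, and this is not the actual Bitcoin SV code) of the two checks described above: refuse to download a message whose announced size already exceeds the block size limit, and drop a peer that sends more bytes than its header promised.

```cpp
#include <cstdint>
#include <iostream>

// Assumed limit for illustration; in a real node this is a consensus/policy setting.
constexpr uint64_t MAX_BLOCK_SIZE = 128ull * 1000 * 1000;

// Don't bother downloading a message that announces itself as larger than any
// block we would accept (e.g. a 129 MB block under a 128 MB limit).
bool ShouldDownload(uint64_t announcedSize) {
    return announcedSize <= MAX_BLOCK_SIZE;
}

// While streaming the payload, drop the peer as soon as it has sent more bytes
// than the header claimed (e.g. announced 30 MB, already received 33 MB).
bool WithinAnnouncedSize(uint64_t announcedSize, uint64_t bytesReceived) {
    return bytesReceived <= announcedSize;
}

int main() {
    std::cout << ShouldDownload(129'000'000) << "\n";          // 0: reject up front
    std::cout << WithinAnnouncedSize(30'000'000, 33'000'000);  // 0: drop the connection
}
```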
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there are a lot of questions around, you know, what practical size Bitcoin SV could scale to right now, and the concerns around propagating those blocks across the whole network.
Steve: 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not send 32MB of data in most cases - almost all cases. The concern here that I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they’ve given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that that can sustain is pretty large. So we're looking at a couple of options - it may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side, send it over multiple links via splitters, and reassemble it on the other side, so we can sort of transit the Great Firewall without too much trouble. But I mean getting back to the core of your question - yes, there is a theoretical limit to block size propagation time and that's kind of where Moore's Law comes in. Put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128 meg blocks are going to be an issue, though, with the speed of the internet that we have nowadays.
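As a rough back-of-the-envelope check based on the 50 to 90 megabits per second figure above (an estimate, not a measurement): 128 MB is about 1,024 megabits, so even pushing a full, uncompressed 128 MB block across such a link would take roughly 1024 / 90 ≈ 11 seconds at the top of that range and 1024 / 50 ≈ 20 seconds at the bottom - and with compact blocks only a fraction of that data actually has to cross the link at propagation time.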
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size - I think right now it’s going from 201 to 500 [opcodes]. So a few of the questions we got were, #1, why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and then #2, specifically, how certain are you that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's an interesting decision - we were initially planning on removing that cap altogether, and the next cap that comes into play after that (the next effective cap) is a 10,000 byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that, when the primary criticism I think that was leveled against us was that it’s dangerous to increase that limit to unlimited. We did that because we’re being conservative. We did some research into these n squared bugs, sorry – attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about, you know, taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation code path is properly parallelized (parts of it already are), then we’ll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through. Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - I mean miners could always put a time limit on script execution if they wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job I think is to provide the tools for the miners, and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
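A minimal sketch of that "lanes" idea (illustrative only - the class, the thread counts, and the jobs are invented, and this is not the Bitcoin SV implementation): two small worker pools, one reserved for well-known transaction templates and one for everything else, so a slow non-standard script can only clog its own lane.

```cpp
#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A tiny fixed-size worker pool; each pool is one validation "lane".
class ValidationLane {
public:
    explicit ValidationLane(size_t threads) {
        for (size_t i = 0; i < threads; ++i)
            workers_.emplace_back([this] { Run(); });
    }
    ~ValidationLane() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }
    void Submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }

private:
    void Run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                if (jobs_.empty()) return;  // done_ is set and nothing left to do
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // validate one script
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};

int main() {
    ValidationLane standardLane(4);     // reserved for well-known templates (e.g. P2PKH)
    ValidationLane nonStandardLane(2);  // anything else, including slow or nasty scripts

    // A pathological non-standard script ties up only the non-standard lane...
    nonStandardLane.Submit([] { std::this_thread::sleep_for(std::chrono::seconds(1)); });

    // ...while well-known templates keep flowing through their own lane.
    for (int i = 0; i < 8; ++i)
        standardLane.Submit([i] { std::cout << "validated standard tx " << i << "\n"; });
}
```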
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer to peer network, doesn't have to accept that transaction - you can reject it. If it looks suspicious to the node, it can just say, you know, we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can’t do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities that could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There’s a lot of discussion about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Could you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well, I mean one of the most significant things is that, other than two, which are minor variants of DUP and MUL, they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why are you putting OP_MUL back in if you're planning on changing them to big number operations instead of the 32-bit limit that they're currently imposed upon. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We’ve got add, divide, subtract, modulo – it’s odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine - or rather not completing it, but putting it back the way that it was meant to be.
Connor: 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. Daniel - I think I saw you answer this on Reddit a little while ago - the new opcodes use logical shifts whereas Satoshi’s version used arithmetic shifts. The general question that I think a lot of people keep bringing up, maybe in a rhetorical way, is why not restore it back to the way Satoshi had it exactly - what are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah, there's two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. So the new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic-based shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values, so the values that represent numbers. They're little-endian, which means they're swapped around compared to what many other systems use - what I'd consider normal - which is big-endian. And if you start shifting that properly as a number, then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement bitwise shift with arithmetic - so we chose to make them bitwise operators. That's what we proposed.
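A small illustration of why the two interpretations diverge on little-endian numbers (my own sketch, not the actual opcode implementation): take the number 1 encoded little-endian in two bytes and compare a numeric shift by 8 with a plain bitwise shift of the raw byte string.

```cpp
#include <cstdint>
#include <iomanip>
#include <iostream>
#include <vector>

// Shift a raw byte string left by one bit, treating byte[0] as the front of a
// plain bit string (the "bitwise" interpretation); overflow is simply dropped.
std::vector<uint8_t> BitwiseShiftLeft(std::vector<uint8_t> data) {
    uint8_t carry = 0;
    for (auto it = data.rbegin(); it != data.rend(); ++it) {
        uint8_t nextCarry = (*it & 0x80) ? 1 : 0;
        *it = static_cast<uint8_t>((*it << 1) | carry);
        carry = nextCarry;
    }
    return data;
}

void Print(const char* label, const std::vector<uint8_t>& bytes) {
    std::cout << label;
    for (uint8_t b : bytes)
        std::cout << std::hex << std::setw(2) << std::setfill('0') << +b << ' ';
    std::cout << '\n';
}

int main() {
    // The number 1 encoded little-endian in two bytes.
    std::vector<uint8_t> value = {0x01, 0x00};

    // Numerically, 1 << 8 == 256, which little-endian encodes as 0x00 0x01:
    // the set bit has to move towards the *end* of the byte string.
    Print("numeric 256, little-endian: ", {0x00, 0x01});

    // A plain bitwise shift of the raw bytes moves bits the other way, so after
    // eight shifts the bit falls off the front and we are left with zero.
    for (int i = 0; i < 8; ++i) value = BitwiseShiftLeft(value);
    Print("raw bytes shifted 8 times:  ", value);
}
```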
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was actually made in May, or rather a consequence of decisions that were made in May. So in May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was also made then. So that was not a decision that we made unilaterally - it was a decision that was made collectively with all of the BCH developers. Well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, I think - that's a single operator that multiplies the value by two, right - but it was pointed out that that can very easily be achieved by just doing a multiply by two instead of having a separate operator for it, so we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum, yeah.
Steve: 0:32:17.59,0:33:47.20
There was an appetite for keeping the operators minimal. I mean the idea to replace OP_SUBSTR, OP_LEFT, and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word kind of carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry, big-endian isn't really applicable - just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have just been nonsensical and very difficult for anyone to work with, so yeah. I mean it's a bit like P2SH - it wasn't a part of the original Satoshi protocol, but once some things are done they're done, and you know if you want to make forward progress you've got to work within the framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones then it gets really complicated - big number implementations - because then you can't change the behavior of the existing opcodes, and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other point is you don't know what scripts are out there because of P2SH - there could be scripts that you don't know the content of, and you don't know what effect changing the behavior of these operators would have on them. The big number thing is tricky, so another option might be... yeah, I don't know what the options are - it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That’s something we've reached out to the other implementation teams about - we actually would really like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year, or when, but we’re certainly willing to put a lot of resources into it and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Kind of along this similar vein - you know, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for “non-standard scripts”, as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that, so what are your thoughts about non-standard scripts and the entirety of an IsStandard check?
Steve: 0:35:58.31,0:37:35.73
I’d actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the words "well-known script template" there’s already a check in Bitcoin that kind of tells you if it's well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So I mean standard transactions as a concept is meaningful to an arbitrary degree, I suppose, but yeah, I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I’ve had with CoinGeek they’re quite keen on making their miners accept, you know, at least initially, a wider variety of transactions.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there are some people who disagree with Bitcoin SV and what they're doing - a lot of questions around, you know, why November? Why implement these changes in November - they think that maybe with a six-month delay it might not cause a split. So, first off, what do you think about the idea of a potential split, and I guess what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change, I think on August 16th or 17th, something like that, and their client as well, and it included CTOR and it included DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - I mean CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction - or even a non-P2SH transaction - into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but what we're doing - I mean Bitcoin ABC already published their spec for May and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't - come November, you know, it's a bit like Segwit. Once Segwit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy and it's going to cause a lot of economic disruption. So yeah, that's it. We're putting our changes in because it's not gonna make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with no changes, right, but the November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been kind of floated around by Ryan Charles is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down, and the computation time stays the same but maybe the cost is lower - do you kind of share his view on that, or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this – there are different ways to look at it, right. I'm an engineer - my specialization is software - so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones who say it's a subsidy - it looks very much like it to me - but yeah, that's not my area. What I can talk about is the software. So adding DSV adds really quite a lot of complexity to the code, right, and it's a big change to add that. And what are we going to do - every time someone comes up with an idea we’re going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like, how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. And with DSV, my main consideration at the beginning was that, you know, if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and you know it's easier to test that it works properly and correctly. It's almost like adding (?) code to a microprocessor - you know, why would you do that if you can implement it already in the script that is there.
Steve: 0:43:36.16,0:46:09.71
It’s actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions that we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - it's a combination of SPLIT, SWAP and DROP opcodes to achieve it. So at the really primitive script level we've got this philosophy of let's keep it minimal, and at this other (?) philosophy it’s all let's just add a new opcode for every primitive function, and Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument why have a scripting language at all? Why not just hard code all of these functions in one at a time? You know, pay to public key hash is a well-known construct (?) and not bother executing a script at all - but once we've done that we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel like they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction - so a gigabyte script - then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller, so a Rabin signature script shrinks from 100MB to a couple hundred bytes.
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed, was released originally, with script. I mean it didn't have to be – instead of having a transaction with script you could have accounts, and you could say transfer, you know, so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, yeah, it's really amazing what you can do with script. I'm really looking forward to seeing some very interesting applications from that. I mean Awemany's zero-conf script was really interesting, right. I mean it relies on DSV, which is a problem (and there are some other things that I don't like about it), but him diving in and using script to solve this problem was really cool, it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I asked a question of a couple of people in our research team who have been working on the Rabin signature stuff this morning, actually, and I wasn't sure where they were up to with it, but they're actually working on a proof of concept (which I believe is pretty close to done) which is a Rabin signature script - it will use smaller signatures so that it can fit within the current limits, but it will be, you know, effectively the same algorithm (as DSV). So I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini Rabin signature).
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kind of already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. I was gonna kind of turn that into: with the plan that Bitcoin SV is on, do you guys see a potential one final release - you know, no new opcodes ever released (like maybe five years down the road we just solidify the base protocol and move forward with that) - or are you guys more of the idea of being open-ended, where we can introduce new opcodes with appropriate testing?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place, and in my opinion it's the cryptographic primitive functions. For example, CHECKSIG uses ECDSA with a specific elliptic curve, HASH256 uses SHA256 - at some point in the future those are going to no longer be as secure as we would like them to be and we'll replace them with different hash functions, verification functions, at some point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is, you know, with the full scripting language some solution is implemented, and we discover that this is really useful, and over a period of time - like, you know, measured in years not days - we find a lot of transactions are using this feature. Then maybe, you know, maybe we should look at introducing an opcode to optimize it. But optimizing before we even know if it's going to be useful - yeah, that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner’s point of view, the question is whether it makes more sense for them to be able to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? Yeah, so ultimately these decisions are going to be miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script, but as we're already seeing, miners are actually starting to employ their own developers. I’m not just talking about us - there are other miners in China that I know have got some really bright people on their staff who question and challenge all of the changes - study them and produce their own reports. We've been lucky to actually be able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc [link] [comments]

Popular streamer Ice Poseidon creates a flash game and lead developer hides bitcoin mining script

This is the best tl;dr I could make, original reduced by 78%. (I'm a bot)
Paul "Ice Poseidon" Denino is one of the more notorious "Real-life" streamers, whose videos largely consist of walking around, yelling, and being intentionally outrageous.
Denino's been hit with criticism for something different: hiding a cryptocurrency miner in a piece of software, CxStocks, that a bunch of his fans accessed through a website.
The hard part of CxStocks would be kickstarting an economy, and when the beta version of CxStocks went live yesterday, people discovered a surprising solution: a cryptocurrency miner that would run in the background, while you were using CxStocks.
Two, given how many young fans Denino has, there's reason to believe they wouldn't be aware of the risks of crypto mining.
"Our developer, who released the mining software without consulting Ice, has been let go," said a spokesperson for Denino, "And is no longer associated with the company."
Denino claims he "Didnt [sic] know about the mining before it was released," and wasn't closely following every aspect of CxStocks' development because "There's a certain trust a developer and the person leading it should have." Denino is currently looking for another developer, who will be asked to finish the project "Without the mining or sketchy shit."
Summary Source | FAQ | Feedback | Top keywords: Denino#1 mine#2 CxStocks#3 people#4 Andries#5
Post found in /Bitcoin, /Ice_Poseidon and /BitcoinAll.
NOTICE: This thread is for discussing the submission topic. Please do not discuss the concept of the autotldr bot here.
submitted by autotldr to autotldr [link] [comments]

Popular streamer Ice Poseidon creates a flash game and lead developer hides bitcoin mining script

Popular streamer Ice Poseidon creates a flash game and lead developer hides bitcoin mining script submitted by BitcoinAllBot to BitcoinAll [link] [comments]

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments) /r/btc

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments) /btc submitted by ABitcoinAllBot to BitcoinAll [link] [comments]

BFGMiner Development Team Releases Version 4.0 of Leading Bitcoin Mining Software

BFGMiner Development Team Releases Version 4.0 of Leading Bitcoin Mining Software submitted by BTCNews to BTCNews [link] [comments]

BFGMiner Development Team Releases Version 4.0 of Leading Bitcoin Mining Software

BFGMiner Development Team Releases Version 4.0 of Leading Bitcoin Mining Software submitted by cryptocurrencylive to CryptoCurrencyLive [link] [comments]

Why Osana takes so long? (Programmer's point of view on current situation)

I decided to write a comment about «Why Osana takes so long?» somewhere and what can be done to shorten this time. It turned into a long essay. Here's a TL;DR of it:
The cost of never paying down this technical debt is clear; eventually the cost to deliver functionality will become so slow that it is easy for a well-designed competitive software product to overtake the badly-designed software in terms of features. In my experience, badly designed software can also lead to a more stressed engineering workforce, in turn leading higher staff churn (which in turn affects costs and productivity when delivering features). Additionally, due to the complexity in a given codebase, the ability to accurately estimate work will also disappear.
Junade Ali, Mastering PHP Design Patterns (2016)
Longer version: I am not sure if people here wanted an explanation from a real developer who works with C and with relatively large projects, but I am going to do it nonetheless. I am not much interested in Yandere Simulator nor in this genre in general, but there is a lot to learn from this particular development for any fellow programmers and software engineers, to ensure that they never end up in Alex's situation - especially considering that he is definitely not the first one to get himself knee-deep in development hell (do you remember Star Citizen?) and he is definitely not the last one.
On the one hand, people see that Alex works incredibly slowly - the equivalent of, like, one hour per day - comparing it with, say, Papers, Please, a game that was developed in nine months from start to finish by one guy. On the other hand, Alex himself most likely thinks that he works until complete exhaustion each day. In fact, I highly suspect that both those sentences are correct! Because of the mistakes made during the early development stages, which are highly unlikely to be fixed due to the pressure put on the developer right now and due to his overall approach to coding, the cost to add any relatively large feature (e.g. Osana) can be pretty much comparable to the cost of creating a fan game from start to finish. Trust me, I've seen his leaked source code (don't tell anybody about that) and I know what I am talking about. The largest problem in Yandere Simulator right now is its super slow development. So, without further ado, let's talk about how «implementing the low hanging fruit» crippled the development and, more importantly, what would have been an ideal course of action, from my point of view, to get out of it. I'll try to explain things in the easiest terms possible.
  1. else if's and lack of any sort of refactoring in general
The most «memey» one. I won't talk about the performance though (a switch statement is not better in terms of performance, that is a myth. If the compiler detects some code that can be turned into a jump table, for example, it will do it, no matter if it is a chain of if's or a switch statement. Compilers nowadays are way smarter than one might think). Just take a look here. I know that it's his older JavaScript code, but, believe it or not, this piece is still present in the C# version relatively untouched.
I refactored this code for you using the C language (mixed with C++, since there's no this pointer in pure C). Note that the else if's are still there - else if's are not the problem by themselves.
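Since the linked snippets aren't reproduced here, here is a stand-in sketch of the same idea (the names, flags and reactions are invented, not the actual leaked code): store each witnessed event as a bit flag, and combinations stop needing their own branches.

```cpp
#include <cstdio>

// Instead of one branch per *combination* of witnessed events ("Trespassing",
// "Blood", "Trespassing and Blood", ...), store each event as a bit flag and
// let combinations fall out for free.
enum WitnessedFlags : unsigned {
    WITNESSED_NOTHING     = 0,
    WITNESSED_TRESPASSING = 1u << 0,
    WITNESSED_BLOOD       = 1u << 1,
    WITNESSED_WEAPON      = 1u << 2,
    WITNESSED_MURDER      = 1u << 3,
};

// One small, generic reaction routine replaces the long else-if chain.
static void React(unsigned witnessed) {
    if (witnessed == WITNESSED_NOTHING) return;

    if (witnessed & WITNESSED_MURDER) {
        std::puts("Run away and call the police!");
    } else if (witnessed & (WITNESSED_BLOOD | WITNESSED_WEAPON)) {
        std::puts("Become suspicious and warn others.");
    } else if (witnessed & WITNESSED_TRESPASSING) {
        std::puts("Ask the player to leave.");
    }
}

int main() {
    // The "Trespassing and Blood" case needs no dedicated branch - it is just two flags.
    React(WITNESSED_TRESPASSING | WITNESSED_BLOOD);
}
```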
The refactored code is just objectively better for one simple reason: it is shorter, while not being obscure, and now it should be able to handle, say, the Trespassing and Blood case without any input from the developer, due to the usage of flags. Basically, the shorter your code, the more you can see on screen without spreading your attention too much. As a rule of thumb, the fewer lines there are, the easier it is for you to work with the code. Just don't overdo it, unless you are going to participate in the International Obfuscated C Code Contest. Let me reiterate:
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
Antoine de Saint-Exupéry
This is why refactoring — the activity of rewriting your old code so it does the same thing, but does it quicker, in a more generic way, in fewer lines, or more simply — is so powerful. In my experience, you can only keep one module/class/whatever in your brain if it does not exceed ~1000 lines, maybe ~1500. Splitting a 17000-line-long class into smaller classes probably won't improve performance at all, but it will make working with parts of this class way easier.
Is it too late now to start refactoring? Of course NO: better late than never.
  1. Comments
If you think that because you wrote this code you'll always easily remember it, I have some bad news for you: you won't. In my experience, one week and that's it. That's why comments are so crucial. It is not necessary to put a ton of comments everywhere, but just a general idea will help you out in the future - even if you think that It Just Works™ and you'll never ever need to fix it. The time spent writing and debugging one line of code almost always exceeds the time to write one comment in large-scale projects. Moreover, the best code is the code that is self-evident. In the example above, what the hell does (float) 6 mean? Why not wrap it into a constant with a good, self-descriptive name? Again, it won't affect performance, since the C# compiler is smart enough to silently remove this constant from the real code and place its value into the method invocation directly. Such constants are there for you.
I rewrote my code above a little bit to illustrate this. With those comments, you don't have to remember your code at all, since its functionality is outlined in two tiny lines of comments above it. Moreover, even a person with zero knowledge in programming will figure out the purpose of this code. It took me less than half a minute to write those comments, but it'll probably save me quite a lot of time of figuring out «what was I thinking back then» one day.
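As a stand-in for the snippet linked above (the names and numbers are invented for illustration, and the sketch is written in C/C++ while the real project is C#), the before/after looks roughly like this:

```cpp
#include <cstdio>

// Before (hypothetical line): what does 6 mean, and why?
//     sanity.Lower((float)6 * Time.deltaTime);

// After: the compiler folds the constant away just the same, but the reader
// no longer has to guess, and the comment records *why* the value exists.
constexpr float SANITY_DRAIN_PER_SECOND = 6.0f;  // invented value: a witness panics in ~10 s

// Drain a character's sanity over time; when it reaches zero the character
// switches to their panic routine.
void DrainSanity(float& sanity, float deltaSeconds) {
    sanity -= SANITY_DRAIN_PER_SECOND * deltaSeconds;
}

int main() {
    float sanity = 60.0f;
    DrainSanity(sanity, 0.5f);  // half a second of game time
    std::printf("sanity is now %.1f\n", sanity);
}
```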
Is it too late now to start adding comments? Again, of course NO. Don't be lazy and redirect all your typing from the «debunk» page (which pretty much does the opposite of debunking, but who am I to judge you here?) into some useful comments.
  1. Unit testing
This is often neglected, but consider the following. You wrote some code, you ran your game, you saw a new bug. Was it introduced right now? Is it a problem in your older code which has shown up just because you have never actually used it until now? Where should you search for it? You have no idea, and you have one painful debugging session ahead. Just imagine how much easier it would be if you had some routines which automatically execute after each build and check that the environment is still sane and nothing broke on a fundamental level. This is called unit testing, and yes, unit tests won't be able to catch all your bugs, but even getting 20% of bugs identified at an earlier stage is a huge boon to development speed.
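A minimal example of what such a routine can look like (the function and the numbers are invented, not project code) - a handful of asserts compiled into a tiny program that runs after every build:

```cpp
#include <cassert>

// Invented game rule for illustration: reputation drops by the rumor's
// strength and is clamped at -100.
int ReputationAfterRumor(int reputation, int rumorStrength) {
    int result = reputation - rumorStrength;
    return result < -100 ? -100 : result;
}

int main() {
    // Each assert documents expected behaviour; a failing build-time run
    // points straight at the code that regressed.
    assert(ReputationAfterRumor(50, 10) == 40);
    assert(ReputationAfterRumor(-95, 20) == -100);  // clamping still works
    assert(ReputationAfterRumor(0, 0) == 0);
    return 0;
}
```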
Is it too late now to start adding unit tests? Kinda YES and NO at the same time. Unit testing works best if it covers the majority of the project's code. On the other hand, a journey of a thousand miles begins with a single step. If you decide to start refactoring your code, writing a unit test before refactoring will help you prove to yourself that you have not broken anything without the need to run the game at all.
  1. Static code analysis
This is basically pretty self-explanatory. You set this thing up once, you forget about it. A static code analyzer is another piece of «free real estate» to speed up the development process by finding tiny little errors, mostly silly typos (do you think that you are good enough at finding them? Well, good luck catching x << 4; in place of x <<= 4; buried deep in C code by eye!). Again, this is not a silver bullet - it is another tool which will help you out with debugging a little bit, along with the debugger, unit tests and other things. You need every little bit of help here.
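To spell that typo out (a contrived function, not real project code): the first statement computes a value and silently throws it away, which is exactly the kind of «expression result unused» finding an analyzer - or even a compiler warning - reports immediately, while a human eye skates right past it.

```cpp
#include <cstdio>

// The buggy version: looks plausible at a glance, but the shift result is
// discarded, so the function returns its input unchanged.
unsigned ScaleBrightness(unsigned x) {
    x << 4;   // BUG: result unused; should be `x <<= 4;`
    return x;
}

int main() {
    std::printf("%u\n", ScaleBrightness(3));  // prints 3, not the intended 48
}
```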
Is it too late now to hook up static code analyzer? Obviously NO.
  1. Code architecture
Say you want to build Osana, but then you decide to implement some feature, e.g. Snap Mode. By doing this you have maybe made your game a little bit better, but what you have also essentially done is complicate your life, because now you have to write Osana code for Snap Mode as well. The way the game architecture is done right now, easter egg code is deeply interleaved with game logic, which leads to code «spaghettifying», which in turn slows down the addition of new features, because one has to consider how each new feature would work alongside every old feature and easter egg. Even if it is just gazing over one line per easter egg, it adds up to the mess, slowly but surely.
A lot of people mention that the developer should have been doing it in an object-oriented way. However, there is no silver bullet in programming. It does not matter that much whether you are doing it the object-oriented way or the usual procedural way; you can theoretically write, say, AI routines in a functional language (e.g. LISP) or even a logic language if you are brave enough (e.g. Prolog). You can even invent your own tiny programming language! The only thing that matters is code quality and avoiding the so-called shotgun surgery situation, which plagues Yandere Simulator from top to bottom right now. Is there a way of adding a new feature without interfering with your older code (e.g. by creating a child class which will encapsulate all the things you need)? Go for it - this feature is basically «free» for you. Otherwise you'd better think twice before doing it, because you are going into «technical debt» territory, borrowing your time from the future by saying «I'll maybe optimize it later» and «a thousand more lines probably won't slow me down in the future that much, right?». Technical debt will incur interest of its own that you'll have to pay. Basically, the entire situation around Osana right now is just a huge tale about how the «interest» incurred by technical debt can control the entire project, like the tail wagging the dog.
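A sketch of what that encapsulation can look like (the classes are invented, and it is written in C++ rather than the project's C#; it illustrates the principle, not a drop-in fix): optional features hook into a small set of events instead of being woven through every class.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Each optional feature lives behind one small interface and reacts to a
// handful of well-defined events, instead of being checked for all over the
// codebase (the shotgun surgery situation).
class GameMode {
public:
    virtual ~GameMode() = default;
    virtual void OnStudentWitnessedMurder(const std::string& /*studentName*/) {}
};

// All Snap Mode logic is contained here; deleting the feature means deleting
// this one class, not hunting through 17000-line files.
class SnapMode : public GameMode {
public:
    void OnStudentWitnessedMurder(const std::string& studentName) override {
        std::cout << studentName << " triggers the Snap Mode sequence\n";
    }
};

class Game {
public:
    void AddMode(std::unique_ptr<GameMode> mode) { modes_.push_back(std::move(mode)); }
    void StudentWitnessedMurder(const std::string& name) {
        for (auto& m : modes_) m->OnStudentWitnessedMurder(name);  // core code never names a feature
    }
private:
    std::vector<std::unique_ptr<GameMode>> modes_;
};

int main() {
    Game game;
    game.AddMode(std::make_unique<SnapMode>());  // the feature is opt-in and isolated
    game.StudentWitnessedMurder("Senpai");
}
```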
I won't elaborate here further, since it'll take me an even larger post to fully describe what's wrong about Yandere Simulator's code architecture.
Is it too late to rebuild the code architecture? Sadly, YES, although it should be possible to split the Student class into descendants by using hooks for individual students. However, the code architecture can be improved by a vast margin if you start removing easter eggs and features like Snap Mode that currently bloat Yandere Simulator. I know it is going to be painful, but it is the only way to improve code quality here and now. This will simplify the code, and this will make it easier for you to add the «real» features, like Osana or whatever you'd like to accomplish. If you ever want them back, you can track them down in Git history and re-implement them one by one, hopefully without performing the shotgun surgery this time.
  1. Loading times
Again, I won't be talking about the performance, since you can debug your game on 20 FPS as well as on 60 FPS, but this is a very different story. Yandere Simulator is huge. Once you fixed a bug, you want to test it, right? And your workflow right now probably looks like this:
  1. Fix the code (unavoidable time loss)
  2. Rebuild the project (can take a loooong time)
  3. Load your game (can take a loooong time)
  4. Test it (unavoidable time loss, unless another bug has popped up via unit testing, code analyzer etc.)
And you can fix that. For instance, I know that Yandere Simulator generates all the students' photos during loading. Why should that be done there? Why not either move it to the project build stage by adding a build hook so Unity does that for you during a full project rebuild, or, even better, why not disable it completely or replace it with «PLACEHOLDER» text for debug builds? Each second spent watching the loading screen will be rightfully interpreted as «son is not coding» by the community.
Is it too late to reduce loading times? Hell NO.
  1. Jenkins
Or any other continuous integration tool. «Rebuild a project» can take a long time too, and what can we do about that? Let me give you an idea. Buy a new PC. Get a 32-core Threadripper, 32 GB of the fastest RAM you can afford and a cool motherboard which would support all of that (of course, Ryzen/i5/Celeron/i386/Raspberry Pi is fine too, but the faster, the better). The rest is not necessary; e.g. a barely functional second-hand video card burned out by bitcoin mining is fine. You set up this second PC in your room. You connect it to your network. You set up a ramdisk to speed things up even more. You properly set up Jenkins on this PC. From now on, Jenkins takes care of the rest: tracking your Git repository, the (re)building process, large and time-consuming unit tests, invoking the static code analyzer, profiling, generating reports and whatever else you can and want to hook up. More importantly, you can fix another bug while Jenkins is rebuilding the project for the previous one, et cetera.
In general, continuous integration is a great technology to quickly track down errors that were introduced in previous versions, avoiding those kinds of bug hunting sessions. I am highly unsure if continuous integration is needed for projects that are 10,000-20,000 source lines long, but things can be different as soon as we step into the 100k+ territory, and Yandere Simulator by now has approximately 150k+ source lines of code. I think continuous integration might well be worth it for Yandere Simulator.
Is it too late to add continuous integration? NO, albeit it is going to take some time and skills to set up.
  1. Stop caring about the criticism
Stop comparing Alex to Scott Cawthon. IMO Alex is very similar to the person known as SgtMarkIV, the developer of Brutal Doom, who is also a notorious edgelord who, for example, also once told somebody to kill himself, just like… However, while being a horrible person, SgtMarkIV does his job. He simply does not care much about public opinion. That's the difference.
  1. Go outside
Enough said. Your brain works slower if you only think about games and if you can't give it enough oxygen. I know that this one is probably the hardest to implement, but…
That's all, folks.
Bonus: Can you imagine how short this list would have been if someone had just listened to Mike Zaimont instead of breaking down in tears?
submitted by Dezhitse to Osana [link] [comments]

Eth 2.0 vs Polkadot and other musings by a fundamental investor

I spent about two hours on this post and decided it would help the community if I made it more visible. The comment was made as a response to this
I’m trying to avoid falling into a maximalist mindset over time. This isn’t a 100% ETH question, but I’m trying to stay educated about emerging tech.
Can someone help me see the downsides of diversifying into DOTs?
I know Polkadot is more centralized, VC backed, and generally against our ethos here. On chain governance might introduce some unknown risks. What else am I missing?
I see a bunch of posts about how Ethereum and Polkadot can thrive together, but are they not both L1 competitors?
Response:
What else am I missing?
The upsides.
Most of the guys responding to you here are full Eth maxis who drank the Parity is bad koolaid. They are married to their investment and basically emotional / tribal in an area where you should have a cool head. Sure, you might get more upvotes on Reddit if you do and say what the crowd wants, but do you want upvotes and fleeting validation or do you want returns on your investment? Do you want to be these guys or do you want to be the shareholder making bank off of those guys?
Disclaimer: I'm both an Eth whale and a Dot whale, and have been in crypto for close to a decade now. I originally bought ether sub $10 after researching it for at least a thousand hours. Rode to $1500 and down to $60. Iron hands - my intent has always been to reconsider my Eth position after proof of stake is out. I invested in the 2017 Dot public sale with the plan of flipping profits back to Eth but keeping Dots looks like the right short and long term play now. I am not a trader, I just take a deep tech dive every couple of years and invest in fundamentals.
Now as for your concerns:
I know Polkadot is more centralized
The sad truth is that the market doesn't really care about this. At all. There is no real statistic to show at what point a coin is "decentralized" or "too centralized". For example, bitcoin has been completely taken over by Chinese mining farms for about five years now. Last I checked, they control above 85% of the hashing power, they just spread it among different mining pools to make it look decentralized. They have had the ability to fake or block transactions for all this time but it has never been in their best interest to do so: messing with bitcoin in that way would crash its price, therefore their bitcoin holdings, their mining equipment, and their company stock (some of them worth billions) would evaporate. So they won't do it due to economics, but not because they can't.
That is the major point I want to get across; originally Bitcoin couldn't be messed with because it was decentralized, but now Bitcoin is centralized but it's still not messed with due to economics. It is basically ChinaCoin at this point, but the market doesn't care, and it still enjoys over 50% of the total crypto market cap.
So how does this relate to Polkadot? Well fortunately most chains - Ethereum included - are working towards proof of stake. This is obviously better for the environment, but it also has a massive benefit for token holders. If a hostile party wanted to take over a proof of stake chain they'd have to buy up a massive share of the network. The moment they force through a malicious transaction a proof of stake blockchain has the option to fork them off. It would be messy for a few days, but by the end of the week the hostile party would have a large amount of now worthless tokens, and the proof of stake community would have moved on to a version of the blockchain where the hostile party's tokens have been slashed to zero. So not only does the market not care about centralization (Bitcoin example), but proof of stake makes token holders even safer.
That being said, Polkadot's "centralization" is not that far off from Ethereum's. The Web3 Foundation kept 30% of the Dots while the Ethereum Foundation kept 17%. There are whales in Polkadot but Ethereum has them too - 40% of all genesis Ether went to 100 wallets, and many suspect that the original Ethereum ICO was sybiled to make it look more popular and decentralized than it really was. But you don't really care about that, do you? Neither do I. Whales are a fact of life.
VC backed
VCs are part of the crypto game now. There is no way to get rid of them, and there is no real reason why you should want to get rid of them. They put their capital at risk (same as you and me) and seek returns on their investment (same as you and me). They are both in Polkadot and Ethereum, and have been for years now. I have no issue with them as long as they don't play around with insider information, but that is another topic. To be honest, I would be worried if VCs did not endorse chains I'm researching, but maybe that's because my investing style isn't chasing hype and buying SUSHI style tokens from anonymous (at the time) developers. That's just playing hot potato. But hey, some people are good at that.
As to the amount of wallets that participated in the Polkadot ICO: a little known fact is that more individual wallets participated in Polkadot's ICO than Ethereum's, even though Polkadot never marketed their ICO rounds due to regulatory reasons.
generally against our ethos here
Kool aid.
Some guy that works(ed?) at Parity (who employs what, 200+ people?) correctly said that Ethereum is losing its tech lead and that offended the Ethereum hivemind. Oh no. So controversial. I'm so personally hurt by that.
Some guy that has been working for free on Ethereum basically forever correctly said that Polkadot is taking the blockchain tech crown. Do we A) Reflect on why he said that? or B) Rally the mob to chase him off?
"I did not quit social media, I quit Ethereum. I did not go dark, I just left the community. I am no longer coordinating hard forks, building testnets, or contributing otherwise. I did not work on Polkadot, I never did, I worked on Ethereum. I did not hate Ethereum, I loved it."
Also, Parity locked their funds (and those of about 500+ other wallets not owned by them) and proposed a solution to recover them. When the community voted no they backed off and did not fork the chain, even though they had the influence to do so. For some reason this subreddit hates them for that, even though Parity did the morally right thing. Remember, 500+ other teams or people had their funds locked, so Parity was morally bound to try its best to recover them.
It's just lame drama, to be honest. Nothing to do with ethos, everything to do with emotional tribalism.
Now for the missing upsides (I'll also respond to random fragments scattered in the thread):
This isn’t a 100% ETH question, but I’m trying to stay educated about emerging tech.
A good quick intro to Eth's tech vs Polkadot's tech can be found on this thread, especially this reply. That thread is basically mandatory reading if you care about your investment.
Eth 2.0's features will not really kick in for end users until about 2023. That means every dapp (except DeFi, where the fees make sense due to returns and which is leading the fee market) built on Eth's layer 1 is dead for three years. Remember the trading card games... Gods Unchained? How many players do you think are going to buy and sell cards when the transaction fee is worth more than the cards? All that development is now practically worthless until it can migrate to its own shard. This story repeats for hundreds of other dapp teams whose projects are now priced out for three years. So now they either have to migrate to one of the many unpopulated L2 options (which have their own list of problems and risks, but that's another topic) or they look for another platform, preferably one interoperable with Ethereum. Hence Polkadot's massive growth in developer activity. If you check out https://polkaproject.com/ you'll see 205 projects listed at the time of this post. About a week ago they had 202 listed. That means about one team migrated from another tech stack to build on Polkadot every two days, and trust me, many more will come in when parachains are finally activated, and it will be a complete no-brainer when Polkadot 2.0 is released.
Another huge upside for Polkadot is the Initial Parachain Offerings. Polkadot's version of ICOs. The biggest difference is that you can vote for parachains using your Dots to bind them to the relay chain, and you get some of the parachain's tokens in exchange. After a certain amount of time you get your Dots back. The tokenomics here are impressive: Dots are locked (reduced supply) instead of sold (sell pressure) and you still earn your staking rewards. There's no risk of scammers running away with your Ether and the governance mechanism allows for the community to defund incompetent devs who did not deliver what was promised.
Wouldn’t an ETH shard on Polkadot gain a bunch of scaling benefits that we won’t see natively for a couple years?
Yes. That is correct. Both Edgeware and Moonbeam are EVM compatible. And if the original dapp teams don't migrate their projects someone else will fork them, exactly like SUSHI did to Uniswap, and how Acala is doing to MakerDao.
Although realistically Ethereum has a 5 yr headstart and devs haven't slowed down at all
Ethereum had a five year head start but it turns out that Polkadot has a three year tech lead.
Just because it's "EVM Compatible" doesn't mean you can just plug Ethereum into Polkadot or vice versa; it just means they both understand Ethereum bytecode and you can potentially copy/paste contracts from Ethereum to Polkadot, but you'd still need to add a "bridge" between the 2 chains, so it adds additional complexity and extra steps compared to using any of the existing L2 scaling solutions
That only applies if you are thinking from an Eth maximalist perspective. But if you think from Polkadot's side, why would you need to use the bridge back to Ethereum at all? Everything will be seamless, cheaper, and quicker once the ecosystem starts to flourish.
I see a bunch of posts about how Ethereum and Polkadot can thrive together, but are they not both L1 competitors?
They are competitors. Both have their strategies, and both have their strengths (tech vs time on the market) but they are clearly competing in my eyes. Which is a good thing, Apple and Samsung competing in the cell phone market just leads to more innovation for consumers. You can still invest in both if you like.
Edit - link to post and the rest of the conversation: https://www.reddit.com/ethfinance/comments/iooew6/daily_general_discussion_september_8_2020/g4h5yyq/
Edit 2 - one day later PolkaProject count is 210. Devs are getting the hint :)
submitted by redditsucks_goruqqus to polkadot_market [link] [comments]

Zano Newcomers Introduction/FAQ - please read!

Welcome to the Zano Sticky Introduction/FAQ!

https://preview.redd.it/al1gy9t9v9q51.png?width=424&format=png&auto=webp&s=b29a60402d30576a4fd95f592b392fae202026ca
Hopefully any questions you have will be answered by the resources below, but if you have additional questions feel free to ask them in the comments. If you're quite technically-minded, the Zano whitepaper gives a thorough overview of Zano's design and its main features.
So, what is Zano? In brief, Zano is a project started by the original developers of CryptoNote. Coins with market caps totalling well over a billion dollars (Monero, Haven, Loki and countless others) run upon the codebase they created. Zano is a continuation of their efforts to create the "perfect money", and brings a wealth of enhancements to their original CryptoNote code.
Development happens at a lightning pace, as the Github activity shows, but Zano is still very much a work-in-progress. Let's cut right to it:
Here's why you should pay attention to Zano over the next 12-18 months. Quoting from a recent update:
Anton Sokolov has recently joined the Zano team. ... For the last months Anton has been working on theoretical work dedicated to log-size ring signatures. These signatures theoretically allow for a logarithmic relationship between the number of decoys and the size/performance of transactions. This means that we can set mixins at a level of up to 1000 while keeping the size and processing speed of transactions reasonable. This will take Zano’s privacy to a whole new level, and we believe this technology will turn out to be groundbreaking!
If successful, this scheme will make Zano the most private, powerful and performant CryptoNote implementation on the planet. Bar none. A quantum leap in privacy with a minimal increase in resource usage. And if there's one team capable of pulling it off, it's this one.

What else makes Zano special?

You mean aside from having "the Godfather of CryptoNote" as the project lead? ;) Actually, the calibre of the developers/researchers at Zano probably is the project's single greatest strength. Drawing on years of experience, they've made careful design choices, optimizing performance with an asynchronous core architecture, and flexibility and extensibility with a modular code structure. This means that the developers are able to build and iterate fast, refining features and adding new ones at a rate that makes bigger and better-funded teams look sluggish at best.
Zano also has some unique features that set it apart from similar projects:
Privacy
Firstly, if you're familiar with CryptoNote you won't be surprised that Zano transactions are private. The perfect money is fungible, and therefore must be untraceable. Bitcoin, for the most part, does little to hide your transaction data from unscrupulous observers. With Zano, privacy is the default.
The untraceability and unlinkability of Zano transactions come from its use of ring signatures and stealth addresses. What this means is that no outside observer is able to tell if two transactions were sent to the same address, and for each transaction there is a set of possible senders that make it impossible to determine who the real sender is.
Hybrid PoW-PoS consensus mechanism
Zano achieves an optimal level of security by utilizing both Proof of Work and Proof of Stake for consensus. By combining the two systems, it mitigates their individual vulnerabilities (see 51% attack and "nothing at stake" problem). For an attack on Zano to have even a remote chance of success the attacker would have to obtain not only a majority of hashing power, but also a majority of the coins involved in staking. The system and its design considerations are discussed at length in the whitepaper.
Aliases
Here's a stealth address: ZxDdULdxC7NRFYhCGdxkcTZoEGQoqvbZqcDHj5a7Gad8Y8wZKAGZZmVCUf9AvSPNMK68L8r8JfAfxP4z1GcFQVCS2Jb9wVzoe. I have a hard enough time remembering my phone number. Fortunately, Zano has an alias system that lets you register an address to a human-readable name. (@orsonj if you want to anonymously buy me a coffee)
Multisig
Multisignature (multisig) refers to requiring multiple keys to authorize a Zano transaction. It has a number of applications, such as dividing up responsibility for a single Zano wallet among multiple parties, or creating backups where loss of a single seed doesn't lead to loss of the wallet.
Multisig and escrow are key components of the planned Decentralized Marketplace (see below), so consideration was given to each of them from the design stages. Thus Zano's multisig, rather than being tacked on at the wallet level as an afterthought, is part of its core architecture, incorporated at the protocol level. This base-layer integration means months won't be spent in the future on complicated refactoring efforts in order to integrate multisig into a codebase that wasn't designed for it. Plus, it makes it far easier for third-party developers to include multisig (implemented correctly) in any Zano wallets and applications they create in the future.
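To make the concept concrete, here is a toy sketch of the general M-of-N idea in Python. It is purely illustrative and has nothing to do with Zano's actual protocol-level implementation; the key names are made up.

```
# Toy M-of-N multisig check: a spend is authorized only once enough
# distinct, recognized co-signers have approved it.
def authorized(approvals: set, signers: set, threshold: int) -> bool:
    valid = approvals & signers          # ignore approvals from unknown keys
    return len(valid) >= threshold

# Example: a 2-of-3 wallet shared by three parties
signers = {"key_alice", "key_bob", "key_carol"}
print(authorized({"key_alice"}, signers, 2))               # False
print(authorized({"key_alice", "key_carol"}, signers, 2))  # True
```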
(Double Deposit MAD) Escrow
With Zano's escrow service you can create fully customizable p2p contracts that are designed to, once signed by participants, enforce adherence to their conditions in such a way that no trusted third-party escrow agent is required.
https://preview.redd.it/jp4oghyhv9q51.png?width=1762&format=png&auto=webp&s=12a1e76f76f902ed328886283050e416db3838a5
The Particl project, aside from a couple of minor differences, uses an escrow scheme that works the same way, so I've borrowed the term they coined ("Double Deposit MAD Escrow") as I think it describes the scheme perfectly. The system requires participants to make additional deposits, which they will forfeit if there is any attempt to act in a way that breaches the terms of the contract. Full details can be found in the Escrow section of the whitepaper.
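For intuition, here is a toy payoff sketch of the incentive structure described above (made-up numbers, not Zano's actual contract logic): both sides lock an extra deposit, so walking away from the deal always costs more than completing it.

```
# Toy payoff table for a double-deposit escrow trade.
def outcome(price, buyer_deposit, seller_deposit, completed_honestly):
    if completed_honestly:
        # price moves to the seller, both deposits are returned in full
        return {"buyer": -price, "seller": +price}
    # deal breached: both sides forfeit their deposits
    return {"buyer": -buyer_deposit, "seller": -seller_deposit}

print(outcome(100, 50, 50, True))   # {'buyer': -100, 'seller': 100}
print(outcome(100, 50, 50, False))  # {'buyer': -50, 'seller': -50}
```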
The usefulness of multisig and the escrow system may not seem obvious at first, but as mentioned before they'll form the backbone of Zano's Decentralized Marketplace service (described in the next section).

What does the future hold for Zano?

The planned upgrade to Zano's privacy, mentioned at the start, is obviously one of the most exciting things the team is working on, but it's not the only thing.
Zano Roadmap
Decentralized Marketplace
From the beginning, the Zano team's goal has been to create the perfect money. And money can't just be some vehicle for speculative investment, money must be used. To that end, the team has created a set of tools to make it as simple as possible for Zano to be integrated into e-commerce platforms. Zano's APIs and plugins are easy to use, allowing even those with very little coding experience to use them in their e-commerce-related ventures. The culmination of this effort will be a full Decentralized Anonymous Marketplace built on top of the Zano blockchain. Rather than being accessed via the wallet, it will act more as a service - Marketplace as a Service (MAAS) - for anyone who wishes to use it. The inclusion of a simple "snippet" of code into a website is all that's needed to become part of a global decentralized, trustless and private e-commerce network.
Atomic Swaps
Just as Zano's marketplace will allow you to transact without needing to trust your counterparty, atomic swaps will let you easily convert between Zano and other cryptocurrencies without having to trust a third-party service such as a centralized exchange. On top of that, they will also pave the way for Zano's inclusion in the many decentralized exchange (DEX) services that have emerged in recent years.

Where can I buy Zano?

Zano's currently listed on the following exchanges:
https://coinmarketcap.com/currencies/zano/markets/
It goes without saying, neither I nor the Zano team work for any of the exchanges or can vouch for their reliability. Use at your own risk and never leave coins on a centralized exchange for longer than necessary. Your keys, your coins!
If you have any old graphics cards lying around (both AMD & NVIDIA), then Zano is also mineable through its unique ProgPowZ algorithm. Here's a guide on how to get started.
Once you have some Zano, you can safely store it in one of the desktop or mobile wallets (available for all major platforms).

How can I support Zano?

Zano has no marketing department, which is why this post has been written by some guy and not the "Chief Growth Engineer @ Zano Enterprises". The hard part is already done: there's a team of world class developers and researchers gathered here. But, at least at the current prices, the team's funds are enough to cover the cost of development and little more. So the job of publicizing the project falls to the community. If you have any experience in community building/growth hacking at another cryptocurrency or open source project, or if you're a Zano holder who would like to ensure the project's long-term success by helping to spread the word, then send me a pm. We need to get organized.
Researchers and developers are also very welcome. Working at the cutting edge of mathematics and cryptography means Zano provides challenging and rewarding work for anyone in those fields. Please contact the project's Community Manager u/Jed_T if you're interested in joining the team.
Social Links:
Twitter
Discord Server
Telegram Group
Medium blog
I'll do my best to keep this post accurate and up to date. Message me please with any suggested improvements and leave any questions you have below.
Welcome to the Zano community and the new decentralized private economy!
submitted by OrsonJ to Zano [link] [comments]

Technical: The Path to Taproot Activation

Taproot! Everybody wants to have it, somebody wants to make it, nobody knows how to get it!
(If you are asking why everybody wants it, see: Technical: Taproot: Why Activate?)
(Pedants: I mostly elide over lockin times)
Briefly, Taproot is that neat new thing that gets us:
So yes, let's activate taproot!

The SegWit Wars

The biggest problem with activating Taproot is PTSD from the previous softfork, SegWit. Pieter Wuille, one of the authors of the current Taproot proposal, has consistently held the position that he will not discuss activation, and will accept whatever activation process is imposed on Taproot. Other developers have expressed similar opinions.
So what happened with SegWit activation that was so traumatic? SegWit used the BIP9 activation method. Let's dive into BIP9!

BIP9 Miner-Activated Soft Fork

Basically, BIP9 has a bunch of parameters, the two important ones being:
  1. A bit: which bit of the block header nVersion field miners set to signal readiness for the softfork.
  2. A timeout: a deadline; if the softfork has not activated by this time, the deployment is abandoned.
Now there are other parameters (name, starttime) but they are not anywhere near as important as the above two.
A number that is not a parameter is 95%. Basically, activation of a BIP9 softfork is considered to have actually succeeded if at least 95% of blocks in the last 2 weeks (one 2016-block retarget period) had the specified bit in their nVersion set. If less than 95% had this bit set before the timeout, then the upgrade fails and never goes into the network. This is not a parameter: it is a constant defined by BIP9, and developers using BIP9 activation cannot change it.
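To make that threshold concrete, here is a rough Python sketch of the check (simplified: a real node evaluates this once per 2016-block retarget period and also takes the starttime/timeout window into account):

```
# Does a 2016-block window meet the 95% BIP9 signalling threshold for a given bit?
def bip9_window_activates(nversions, bit, threshold=0.95):
    signalling = sum(1 for v in nversions if (v >> bit) & 1)
    return signalling / len(nversions) >= threshold

# Example: 1916 of 2016 blocks signalling bit 1 is just above 95%
window = [0x20000002] * 1916 + [0x20000000] * 100
print(bip9_window_activates(window, bit=1))  # True
```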
So, first some simple questions and their answers:

The Great Battles of the SegWit Wars

SegWit not only fixed transaction malleability, it also created a practical softforkable blocksize increase that also rebalanced weights so that the cost of spending a UTXO is about the same as the cost of creating UTXOs (and spending UTXOs is "better" since it limits the size of the UTXO set that every fullnode has to maintain).
So SegWit was written, the activation was decided to be BIP9, and then.... miner signalling stalled at below 75%.
Thus were the Great SegWit Wars started.

BIP9 Feature Hostage

If you are a miner with at least 5% global hashpower, you can hold a BIP9-activated softfork hostage.
You might even secretly want the softfork to actually push through. But you might want to extract concession from the users and the developers. Like removing the halvening. Or raising or even removing the block size caps (which helps larger miners more than smaller miners, making it easier to become a bigger fish that eats all the smaller fishes). Or whatever.
With BIP9, you can hold the softfork hostage. You just hold out and refuse to signal. You tell everyone you will signal, if and only if certain concessions are given to you.
This ability by miners to hold a feature hostage was enabled because of the miner-exit allowed by the timeout on BIP9. Prior to that, miners were considered little more than expendable security guards, paid for the risk they take to secure the network, but not special in the grand scheme of Bitcoin.

Covert ASICBoost

ASICBoost was a novel way of optimizing SHA256 mining, by taking advantage of the structure of the 80-byte header that is hashed in order to perform proof-of-work. The details of ASICBoost are out of scope here, but you can read about it elsewhere.
Here is a short summary of the two types of ASICBoost, relevant to the activation discussion.
Now, "overt" means "obvious", while "covert" means hidden. Overt ASICBoost is obvious because nVersion bits that are not currently in use for BIP9 activations are usually 0 by default, so setting those bits to 1 makes it obvious that you are doing something weird (namely, Overt ASICBoost). Covert ASICBoost is non-obvious because the order of transactions in a block are up to the miner anyway, so the miner rearranging the transactions in order to get lower power consumption is not going to be detected.
Unfortunately, while Overt ASICBoost was compatible with SegWit, Covert ASICBoost was not. This is because, pre-SegWit, only the block header Merkle tree committed to the transaction ordering. However, with SegWit, another Merkle tree exists, which commits to transaction ordering as well. Covert ASICBoost would require more computation to manipulate two Merkle trees, obviating the power benefits of Covert ASICBoost anyway.
Now, miners want to use ASICBoost (indeed, about 60-70% of current miners probably use Overt ASICBoost nowadays; if you have a Bitcoin fullnode running you will see logs with lots of "60 of last 100 blocks had unexpected versions", which is exactly what you would see with the nVersion manipulation that Overt ASICBoost does). But remember: ASICBoost was, at the time, a novel improvement. Not all miners had ASICBoost hardware. Those who did, did not want it known that they had ASICBoost hardware, and wanted to do Covert ASICBoost!
But Covert ASICBoost is incompatible with SegWit, because SegWit actually has two Merkle trees of transaction data, and Covert ASICBoost works by fudging around with transaction ordering in a block, and recomputing two Merkle Trees is more expensive than recomputing just one (and loses the ASICBoost advantage).
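As a toy illustration of why ordering matters, here is a simplified Merkle-root sketch in Python (endianness and other Bitcoin serialization details omitted): shuffling the transactions changes the root, and post-SegWit there are two such commitments to keep consistent rather than one.

```
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    level = [dsha256(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx_a", b"tx_b", b"tx_c"]
print(merkle_root(txs).hex())
print(merkle_root(list(reversed(txs))).hex())  # different order, different root
```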
Of course, those miners that wanted Covert ASICBoost did not want to openly admit that they had ASICBoost hardware; they wanted to keep their advantage secret because miners are strongly competitive in a very tight market. And doing ASICBoost covertly was just the ticket, but it could not work post-SegWit.
Fortunately, due to the BIP9 activation process, they could hold SegWit hostage while covertly taking advantage of Covert ASICBoost!

UASF: BIP148 and BIP8

Even when the incompatibility between Covert ASICBoost and SegWit was realized, activation of SegWit still stalled, and miners were still not openly admitting that ASICBoost was related to the non-activation of SegWit.
Eventually, a new proposal was created: BIP148. With this rule, 3 months before the end of the SegWit timeout, nodes would reject blocks that did not signal SegWit. Thus, 3 months before SegWit timeout, BIP148 would force activation of SegWit.
This proposal was not accepted by Bitcoin Core, due to the shortening of the timeout (it effectively times out 3 months before the initial SegWit timeout). Instead, a fork of Bitcoin Core was created which added the patch to comply with BIP148. This was claimed as a User Activated Soft Fork, UASF, since users could freely download the alternate fork rather than sticking with the developers of Bitcoin Core.
Now, BIP148 effectively is just a BIP9 activation, except at its (earlier) timeout, the new rules would be activated anyway (instead of the BIP9-mandated behavior that the upgrade is cancelled at the end of the timeout).
BIP148 was actually inspired by the BIP8 proposal (the link here is a historical version; BIP8 has been updated recently, precisely in preparation for Taproot activation). BIP8 is basically BIP9, but at the end of timeout, the softfork is activated anyway rather than cancelled.
This removed the ability of miners to hold the softfork hostage. At best, they can delay the activation, but not stop it entirely by holding out as in BIP9.
Of course, this implies risk that not all miners have upgraded before activation, leading to possible losses for SPV users, as well as again re-pressuring miners to signal activation, possibly without the miners actually upgrading their software to properly impose the new softfork rules.

BIP91, SegWit2X, and The Aftermath

BIP148 inspired countermeasures, possibly from the Covert ASICBoost miners, possibly from concerned users who wanted to offer concessions to miners. To this day, the common name for BIP148 - UASF - remains an emotionally-charged rallying cry for parts of the Bitcoin community.
One of these was SegWit2X. This was brokered in a deal between some Bitcoin personalities at a conference in New York, and thus part of the so-called "New York Agreement" or NYA, another emotionally-charged acronym.
The text of the NYA was basically:
  1. Set up a new activation threshold at 80% signalled at bit 4 (vs bit 1 for SegWit).
    • When this 80% signalling was reached, miners would require that bit 1 for SegWit be signalled to achieve the 95% activation needed for SegWit.
  2. If the bit 4 signalling reached 80%, increase the block weight limit from the SegWit 4000000 to the SegWit2X 8000000, 6 months after bit 1 activation.
The first item above was coded in BIP91.
Unfortunately, if you read BIP91 independently of the NYA, you might come to the conclusion that BIP91 was only about lowering the threshold to 80%. In particular, BIP91 never mentions anything about the second point above; it never mentions that the bit 4 80% threshold would also signal for a later hardfork increase in the weight limit.
Because of this, even though there are claims that NYA (SegWit2X) reached 80% dominance, a close reading of BIP91 shows that the 80% dominance was only for SegWit activation, without necessarily a later 2x capacity hardfork (SegWit2X).
This ambiguity of bit 4 (NYA says it includes a 2x capacity hardfork, BIP91 says it does not) has continued to be a thorn in later blocksize debates. Economically speaking, Bitcoin futures between SegWit and SegWit2X showed strong economic dominance in favor of SegWit (SegWit2X futures traded at a fraction of the value of SegWit futures: I personally made a tidy but small amount of money betting against SegWit2X in the futures market), so suggesting that NYA achieved 80% dominance even in mining is laughable, but the NYA text that ties bit 4 to SegWit2X still exists.
Historically, BIP91 triggered, which caused SegWit to activate before the BIP148 shorter timeout. BIP148 proponents continue to hold to this day that it was the BIP148 shorter timeout and no-compromises-activate-on-August-1 stance that made miners flock to BIP91 as a face-saving tactic that actually removed the second clause of NYA. NYA supporters keep pointing to the bit 4 text in the NYA and the historical activation of BIP91 as a failed promise by Bitcoin developers.

Taproot Activation Proposals

There are two primary proposals I can see for Taproot activation:
  1. BIP8.
  2. Modern Softfork Activation.
We have discussed BIP8: roughly, it has a bit and a timeout; if 95% of miners signal the bit it activates, and at the end of the timeout it activates anyway. (EDIT: BIP8 has had recent updates: at the end of the timeout it can now either activate or fail. For the most part, in the text below "BIP8" means BIP8-and-activate-at-timeout, and "BIP9" means BIP8-and-fail-at-timeout)
So let's take a look at Modern Softfork Activation!

Modern Softfork Activation

This is a more complex activation method, composed of BIP9 and BIP8 as subcomponents.
  1. First have a 12-month BIP9 (fail at timeout).
  2. If the above fails to activate, have a 6-month discussion period during which users and developers and miners discuss whether to continue to step 3.
  3. Have a 24-month BIP8 (activate at timeout).
The total above is 42 months, if you are counting: 3.5 years worst-case activation.
The logic here is that if there are no problems, BIP9 will work just fine anyway. And if there are problems, the 6-month period should weed it out. Finally, miners cannot hold the feature hostage since the 24-month BIP8 period will exist anyway.
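A toy timeline of that flow, in Python (months are approximate, the early-activation path within the BIP8 phase is glossed over, and the real proposal would be specified in block heights rather than months):

```
def activation_state(month, bip9_succeeded, community_continues):
    if bip9_succeeded:
        return "active"                   # phase 1: plain BIP9 worked
    if month <= 12:
        return "signalling (BIP9)"
    if month <= 18:
        return "discussion period"        # phase 2: 6-month review
    if not community_continues:
        return "abandoned"
    if month <= 42:
        return "signalling (BIP8)"        # phase 3: 24-month BIP8
    return "active"                       # BIP8 timeout: activates regardless

for m in (6, 14, 20, 43):
    print(m, activation_state(m, bip9_succeeded=False, community_continues=True))
```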

PSA: Being Resilient to Upgrades

Software is very brittle.
Anyone who has been using software for a long time has experienced something like this:
  1. You hear a new version of your favorite software has a nice new feature.
  2. Excited, you install the new version.
  3. You find that the new version has subtle incompatibilities with your current workflow.
  4. You are sad and downgrade to the older version.
  5. You find out that the new version has changed your files in incompatible ways that the old version cannot work with anymore.
  6. You tearfully reinstall the newer version and figure out how to recover your lost productivity now that you have to adapt to a new workflow.
If you are a technically-competent user, you might codify your workflow into a bunch of programs. And then you upgrade one of the external pieces of software you are using, and find that it has a subtle incompatibility with your current workflow, which is based on a bunch of simple programs you wrote yourself. And if those simple programs are used as the basis of some important production system, you have just screwed up because you upgraded software on an important production system.
And well, one of the issues with new softfork activation is that if not enough people (users and miners) upgrade to the newest Bitcoin software, the security of the new softfork rules is at risk.
Upgrading software of any kind is always a risk, and the more software you build on top of the software-being-upgraded, the greater you risk your tower of software collapsing while you change its foundations.
So if you have some complex Bitcoin-manipulating system with Bitcoin somewhere at the foundations, consider running two Bitcoin nodes:
  1. One is a "stable-version" Bitcoin node. Once it has synced, set it up to connect=x.x.x.x to the second node below (so that your ISP bandwidth is only spent on the second node). Use this node to run all your software: it's a stable version that you don't change for long periods of time. Enable txindex, disable pruning, whatever your software needs.
  2. The other is an "always-up-to-date" Bitcoin node. Keep its storage down with pruning (initially sync it off the "stable-version" node). You can't use blocksonly if your "stable-version" node needs to send transactions, but otherwise this "always-up-to-date" Bitcoin node can be kept as a low-resource node, so you can run both nodes on the same machine.
When a new Bitcoin version comes out, you just upgrade the "always-up-to-date" Bitcoin node. This protects you if a future softfork activates: you will only receive valid Bitcoin blocks and transactions. Since this node has nothing running on top of it and is just a special peer of the "stable-version" node, any software incompatibilities with your system software do not exist.
Your "stable-version" Bitcoin node remains the same version until you are ready to actually upgrade this node and are prepared to rewrite most of the software you have running on top of it due to version compatibility problems.
When upgrading the "always-up-to-date" node, you can bring it down safely and then start it later. Your "stable-version" node will keep running, disconnected from the network, but otherwise still available for whatever queries. You do need some system to stop the "always-up-to-date" node if for any reason the "stable-version" goes down (otherwise, if the "always-up-to-date" advances its pruning window past what your "stable-version" has, the "stable-version" cannot sync afterwards), but if you are technically competent enough that you need to do this, you are technically competent enough to write such a trivial monitor program; see the sketch below (EDIT: gmax notes you can adjust the pruning window by RPC commands to help with this as well).
This recommendation is from gmaxwell on IRC, by the way.
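For what it's worth, a minimal sketch of such a monitor might look like the following (the datadir paths are made-up placeholders; getblockcount and stop are standard bitcoin-cli RPC calls):

```
import subprocess, time

STABLE_CLI = ["bitcoin-cli", "-datadir=/data/bitcoin-stable"]   # hypothetical path
LATEST_CLI = ["bitcoin-cli", "-datadir=/data/bitcoin-latest"]   # hypothetical path

def node_alive(cli) -> bool:
    try:
        subprocess.run(cli + ["getblockcount"], check=True,
                       capture_output=True, timeout=10)
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired, FileNotFoundError):
        return False

while True:
    if not node_alive(STABLE_CLI) and node_alive(LATEST_CLI):
        # stable node is down: stop the pruned node so it cannot prune past
        # the blocks the stable node will need when it comes back up
        subprocess.run(LATEST_CLI + ["stop"], capture_output=True)
    time.sleep(60)
```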
submitted by almkglor to Bitcoin [link] [comments]

Since they're calling for r/btc to be banned...

Maybe it's time to discuss bitcoin's history again. Credit to u/singularity87 for the original post over 3 years ago.

People should get the full story of bitcoin because it is probably one of the strangest of all reddit subs.
bitcoin, the main sub for the bitcoin community is held and run by a person who goes by the pseudonym u/theymos. Theymos not only controls bitcoin, but also bitcoin.org and bitcointalk.com. These are top three communication channels for the bitcoin community, all controlled by just one person.
For most of bitcoin's history this did not create a problem (at least not an obvious one anyway) until around mid 2015. This happened to be around the time a new player appeared on the scene, a for-profit company called Blockstream. Blockstream was made up of/hired many (but not all) of the main bitcoin developers. (To be clear, Blockstream was founded before mid 2015 but did not become publicly active until then). A lot of people, including myself, tried to point out that there were some very serious potential conflicts of interest that could arise when one single company controls most of the main developers for the biggest decentralised and distributed cryptocurrency. There were a lot of unknowns but people seemed to give them the benefit of the doubt because they were apparently about to release some new software called "sidechains" that could offer some benefits to the network.
Not long after Blockstream came on the scene the issue of bitcoin's scalability once again came to the forefront of the community. This issue had come up within the community a number of times since bitcoin's inception. Bitcoin, as dictated in the code, cannot handle any more than around 3 transactions per second at the moment. To put that in perspective, Paypal handles around 15 transactions per second on average and VISA handles something like 2000 transactions per second. The discussion in the community has been around how best to allow bitcoin to scale to allow a higher number of transactions in a given amount of time. I suggest that if anyone is interested in learning more about this problem from a technical angle, they go to btc and do a search. It's a complex issue but for many who have followed bitcoin for many years, the possible solutions seem relatively obvious. Essentially, currently the limit is put in place in just a few lines of code. This was not originally present when bitcoin was first released. It was in fact put in place afterwards as a measure to stop a bloating attack on the network. Because all bitcoin transactions have to be stored forever on the bitcoin network, someone could theoretically simply transmit a large number of transactions which would have to be stored by the entire network forever. When bitcoin was released, transactions were actually free, as the only people running the network were enthusiasts. In fact a single bitcoin did not even have any specific value, so it would be impossible to set a fee value. This meant that a malicious person could make the size of the bitcoin ledger grow very rapidly at little or no cost, which would stop people from wanting to join the network due to the resource requirements needed to store it, all for very little gain to the attacker.
Towards the end of the summer last year, this bitcoin scaling debate surfaced again as it was becoming clear that the transaction limit for bitcoin was semi-regularly being reached and that it would not be long until it was regularly hit and the network became congested. This was a very serious issue for a currency. Bitcoin had made progress over the years to the point of retailers starting to offer it as a payment option. Companies like Microsoft, Paypal, Steam and many more had begun to adopt it. If the transaction limit were constantly maxed out, the network would become unreliable and slow for users. Users and businesses would not be able to make a reliable estimate of when their transaction would be confirmed by the network.
Users, developers and businesses started to discuss on bitcoin (which at the time was pretty much the only real bitcoin subreddit) how we should solve the problem. There was significant support from the users and businesses behind a simple solution put forward by the developer Gavin Andresen. Gavin was the lead developer after Satoshi Nakamoto left bitcoin; Satoshi left it in his hands. Gavin initially proposed a very simple solution of increasing the limit, which was to change the few lines of code to increase the maximum number of transactions that are allowed. For most of bitcoin's history the transaction limit had been set far, far higher than the number of transactions that could potentially happen on the network. The concept of increasing the limit one time was based on the fact that history had proven that no issue had been caused by this in the past.
A certain group of bitcoin developers decided that increasing the limit by this amount was too much and that it was dangerous. They said that the increased resources the network would use would create centralisation pressures which could destroy the network. The theory was that a miner with more resources could publish many more transactions than a competing small miner could handle, and therefore the network would tend towards a few large miners rather than many small miners. The developers who supported this theory all worked for the company Blockstream. The argument from people in support of increasing the transaction capacity by this amount was that there are always inherent centralisation pressures with bitcoin mining. For example, miners who can access the cheapest electricity will tend to succeed, and bigger miners will be able to find this cheaper electricity more easily. Miners who have access to the most efficient computer chips will tend to succeed, and larger miners are more likely to be able to afford their development. The argument from Gavin and others who supported increasing the transaction capacity by this method was essentially that there are economies of scale in mining and that these economies create far bigger centralisation pressures than the increased resource cost of a larger number of transactions (up to the new limit proposed). For example, at the time the total size of the blockchain was around 50GB. A 500GB SSD costs only about $150 and would last a number of years. This is in comparison to the $100,000s in revenue per day a miner would be making.
Various developers put forth various other proposals, including Gavin Andresen, who put forth a more conservative increase that would then continue to grow over time in line with technological improvements. Some of the employees of Blockstream also put forth proposals, but all were so conservative that it would take bitcoin many decades before it could reach the scale of VISA. Even though there was significant support from the community behind Gavin's simple proposal of increasing the limit, it was becoming clear that certain members of the bitcoin community who were part of Blockstream were starting to become increasingly vitriolic and divisive. Gavin then teamed up with one of the other main bitcoin developers, Mike Hearn, and released a coded (i.e. working) version of the bitcoin software that would only activate if it was supported by a significant majority of the network. What happened next was where things really started to get weird.
After this free and open source software was released, Theymos, the person who controls all the main communication channels for the bitcoin community, implemented a new moderation policy that disallowed any discussion of this new software. Specifically, if people were to discuss this software, their comments would be deleted and ultimately they would be banned temporarily or permanently. This caused chaos within the community as there was very clear support for this software at the time and it seemed our best hope for finally solving the problem and moving on. Instead a censorship campaign was started. At first 'all' they were doing was banning and removing discussions, but after a while it turned into actively manipulating the discussion. For example, if a thread was created where there was positive sentiment for increasing the transaction capacity, or negative sentiment about the moderation policies or about the actions of certain bitcoin developers, the mods of bitcoin would selectively change the sorting order of the thread to 'controversial' so that the most supportive opinions would be sorted to the bottom of the thread and the most vitriolic would be sorted to the top. This was initially very transparent, as it was possible to see that the most downvoted comments were at the top and some of the most upvoted were at the bottom. So they then implemented hiding the voting scores next to the user's name. This made it impossible to work out the sentiment of the community, and when combined with selectively setting the sorting order to controversial it made it possible to control what information users were seeing. Also, due to the very, very large number of removed comments and users, the scale of the censorship going on was becoming obvious. To hide this they implemented code in the CSS for the sub that completely hid comments that they had removed, so that the censorship itself was hidden. Anyone in support of scaling bitcoin was removed from the main communication channels. Theymos even proudly announced that he didn't care if he had to remove 90% of the users. He also later acknowledged that he knew he had the ability to block support of this software using the control he had over the communication channels.
While this was all going on, Blockstream and its employees started lobbying the community by paying for conferences about scaling bitcoin, but with the very strange rule that no decisions could be made and no complete solutions could be proposed. These conferences were likely strategically (and successfully) created to stunt support for the scaling software Gavin and Mike had released by forcing the community to take a "let's wait and see what comes from the conferences" kind of approach. Since no final solutions were allowed at these conferences, they only served to hinder and splinter the community's efforts to find a solution. As the software Gavin and Mike released, called BitcoinXT, gained support it started to be attacked. Users of the software were attacked by DDoS. Employees of Blockstream were recommending attacks against the software, such as faking support for it only to then drop support at the last moment to put the network in disarray. Blockstream employees were also publicly talking about suing Gavin and Mike from various different angles simply for releasing this open source software that no one was forced to run. In the end Mike Hearn decided to leave due to the way many members of the bitcoin community had treated him. This was due to the massive disinformation campaign against him on bitcoin. One of the many tactics used against anyone who does not support Blockstream and the bitcoin developers who work for them is that you will be targeted in a smear campaign. This has happened to a number of individuals and companies who showed support for scaling bitcoin. Theymos has threatened companies that he will ban any discussion of them on the communication channels he controls (i.e. all the main ones) for simply running software that he disagrees with (i.e. any software that scales bitcoin).
As time passed, more and more proposals were offered, all against the backdrop of ever-increasing censorship in the main bitcoin communication channels. It finally came down to the smallest and most conservative solution. This solution was much smaller than even the employees of Blockstream had proposed months earlier. As usual there were enormous attacks from all sides, and the most vocal opponents were the employees of Blockstream. These attacks are still ongoing today. As this software started to gain support, Blockstream organised more meetings, especially with the biggest bitcoin miners, and made a pact with them. They promised that they would release code that would offer an on-chain scaling solution hardfork within about 4 months, but if the miners wanted this they would have to commit to running their software and only their software. The miners agreed, and they ended up not running the most conservative proposal possible. This was in February last year. There is no hardfork proposal in sight from the people who agreed to this pact and bitcoin is still stuck with the exact same transaction limit it has had since the limit was put in place about 6 years ago. Gavin has also been publicly smeared by the developers at Blockstream and a plot was made against him to have him removed from the development team. Gavin has now been, for all intents and purposes, expelled from bitcoin development. This has meant that all control of bitcoin development is in the hands of the developers working at Blockstream.
There is a new proposal that offers a market-based approach to scaling bitcoin. This essentially lets the market decide. Of course, as usual there have been attacks against it, including verbal attacks from the employees of Blockstream. This has the biggest chance of gaining wide support and solving the problem for good.
To give you an idea of Blockstream: it has hired most of the main and active bitcoin developers and is now synonymous with the "Core" bitcoin development team. They have, AFAIK, no products at all. They have received around $75m in funding. Every single thing they do is supported by theymos. They have started implementing an entirely new economic system for bitcoin against the will of its users and have blocked any and all attempts to scale the network in line with the original vision.
Although this comment is ridiculously long, it really only covers the tip of the iceberg. You could write a book on the last two years of bitcoin. The things that have been going on have been mind-blowing. One last thing that I think is worth talking about is u/bashco's claim of vote manipulation.
The users that the video talks about have very, very large numbers of downvotes, mostly because there is a very high chance they are astroturfers. Around the same time last year when Blockstream became active on the scene, every single bitcoin troll disappeared, and I mean literally every single one. In the years before that there were a large number of active anti-bitcoin trolls. They even have an active sub, buttcoin. Up until last year you could go down to the bottom of pretty much any thread in bitcoin and see many of the usual trolls who were heavily downvoted for saying something along the lines of "bitcoin is shit", "You guys and your tulips", etc. But suddenly last year they all disappeared. Instead a new type of bitcoin user appeared: someone who said they were fully in support of bitcoin but just so happened to support every single thing Blockstream and its employees said and did. They had the exact same tone as the trolls who had disappeared. Their way of talking to people was aggressive, they'd call people names, they had a relatively poor understanding of how bitcoin fundamentally worked, and they were extremely argumentative. These users make up the majority of the list in that video. When the tens of thousands of users were censored and expelled from bitcoin they ended up congregating in btc. The strange thing was that the users listed in that video also moved over to btc and spend all day every day posting troll-like comments and misinformation. Naturally they get heavily downvoted by the real users in btc. They spend their time constantly causing as much drama as possible. At every opportunity they scream about "censorship" in btc while they are happy about the censorship in bitcoin. These people are astroturfers. What someone somewhere worked out is that all you have to do to take down a community is say that you are on their side. It is an astoundingly effective form of psychological attack.
submitted by CuriousTitmouse to btc [link] [comments]

Why i’m bullish on Zilliqa (long read)

Edit: TL;DR added in the comments
 
Hey all, I've been researching coins since 2017 and have gone through 100s of them in the last 3 years. I got introduced to blockchain via Bitcoin of course, analyzed Ethereum thereafter and from that moment I have had a keen interest in smart contract platforms. I’m passionate about Ethereum but I find Zilliqa to have a better risk-reward ratio. Especially because Zilliqa has found an elegant balance between being secure, decentralized and scalable in my opinion.
 
Below I post my analysis of why from all the coins I went through I’m most bullish on Zilliqa (yes I went through Tezos, EOS, NEO, VeChain, Harmony, Algorand, Cardano etc.). Note that this is not investment advice and although it's a thorough analysis there is obviously some bias involved. Looking forward to what you all think!
 
Fun fact: the name Zilliqa is a play on ‘silica’ (silicon dioxide), as in “silicon for the high-throughput consensus computer.”
 
This post is divided into (i) Technology, (ii) Business & Partnerships, and (iii) Marketing & Community. I’ve tried to make the technology part readable for a broad audience. If you’ve ever tried understanding the inner workings of Bitcoin and Ethereum you should be able to grasp most parts. Otherwise, just skim through and once you are zoning out head to the next part.
 
Technology and some more:
 
Introduction
 
The technology is one of the main reasons why I’m so bullish on Zilliqa. First thing you see on their website is: “Zilliqa is a high-performance, high-security blockchain platform for enterprises and next-generation applications.” These are some bold statements.
 
Before we deep dive into the technology let’s take a step back in time first as they have quite the history. The initial research paper from which Zilliqa originated dates back to August 2016: Elastico: A Secure Sharding Protocol For Open Blockchains where Loi Luu (Kyber Network) is one of the co-authors. Other ideas that led to the development of what Zilliqa has become today are: Bitcoin-NG, collective signing CoSi, ByzCoin and Omniledger.
 
The technical white paper was made public in August 2017 and since then they have achieved everything stated in the white paper and also created their own open source intermediate level smart contract language called Scilla (functional programming language similar to OCaml) too.
 
Mainnet has been live since the end of January 2019, with daily transaction rates growing continuously. About a week ago mainnet reached 5 million transactions and 500,000+ addresses in total, along with 2400 nodes keeping the network decentralized and secure. Circulating supply is nearing 11 billion and currently only mining rewards are left. The maximum supply is 21 billion, with annual inflation currently at 7.13%, which will only decrease with time.
 
Zilliqa realized early on that the usage of public cryptocurrencies and smart contracts was increasing but decentralized, secure, and scalable alternatives were lacking in the crypto space. They proposed applying sharding to a public smart contract blockchain so that the transaction rate increases almost linearly with the number of nodes. More nodes = higher transaction throughput and increased decentralization. Sharding comes in many forms and Zilliqa uses network-, transaction- and computational sharding. Network sharding opens up the possibility of using transaction- and computational sharding on top. Zilliqa does not use state sharding for now. We’ll come back to this later.
 
Before we continue dissecting how Zilliqa achieves this from a technological standpoint, it’s good to keep in mind that making a blockchain decentralised, secure and scalable at the same time is still one of the main hurdles to widespread usage of decentralised networks. In my opinion this needs to be solved before blockchains can get to the point where they can create and add large-scale value. So I invite you to read the next section to grasp the underlying fundamentals. Because after all, these premises need to be true, otherwise there isn’t a fundamental case to be bullish on Zilliqa, right?
 
Down the rabbit hole
 
How have they achieved this? Let’s define the basics first: key players on Zilliqa are the users and the miners. A user is anybody who uses the blockchain to transfer funds or run smart contracts. Miners are the (shard) nodes in the network who run the consensus protocol and get rewarded for their service in Zillings (ZIL). The mining network is divided into several smaller networks called shards, which is also referred to as ‘network sharding’. Miners subsequently are randomly assigned to a shard by another set of miners called DS (Directory Service) nodes. The regular shards process transactions and the outputs of these shards are eventually combined by the DS shard as they reach consensus on the final state. More on how these DS shards reach consensus (via pBFT) will be explained later on.
 
The Zilliqa network produces two types of blocks: DS blocks and Tx blocks. One DS block spans 100 Tx blocks. And as previously mentioned there are two types of nodes concerned with reaching consensus: shard nodes and DS nodes. Whether a miner becomes a shard node or a DS node is determined by the result of a PoW cycle (Ethash) at the beginning of the DS block. All candidate mining nodes compete with each other and run the PoW (Proof-of-Work) cycle for 60 seconds, and the submissions achieving the highest difficulty are allowed onto the network. To put it in perspective: the average difficulty for one DS node is ~2 Th/s, equaling 2,000,000 Mh/s or 55 thousand+ GeForce GTX 1070 / 8 GB GPUs at 35.4 Mh/s. Each DS block, 10 new DS nodes are admitted. A shard node needs to provide around 8.53 GH/s currently (around 240 GTX 1070s). Dual mining ETH/ETC and ZIL is possible and can be done via mining software such as Phoenix and Claymore. There are pools, and if you have large amounts of hashing power (Ethash) available you could mine solo.
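As a quick sanity check on the GPU figures quoted above, here is the back-of-the-envelope arithmetic in Python, using only the numbers stated in this paragraph (35.4 Mh/s per GTX 1070 taken as given):

```python
# Back-of-the-envelope check of the hashrate figures quoted in the text above.
GTX_1070_MHS = 35.4                      # Mh/s per GPU, as stated

ds_node_difficulty_ths = 2.0             # ~2 Th/s per DS node
ds_node_mhs = ds_node_difficulty_ths * 1_000_000   # 1 Th/s = 1,000,000 Mh/s
print(f"DS node: ~{ds_node_mhs / GTX_1070_MHS:,.0f} GTX 1070s")   # ≈ 56,497 -> "55 thousand+"

shard_node_ghs = 8.53                    # ~8.53 GH/s per shard node
shard_node_mhs = shard_node_ghs * 1_000  # 1 GH/s = 1,000 Mh/s
print(f"Shard node: ~{shard_node_mhs / GTX_1070_MHS:,.0f} GTX 1070s")  # ≈ 241 -> "around 240"
```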
 
The 60-second PoW cycle is a short burst of peak performance and acts as an entry ticket to the network. This entry ticket is a Sybil resistance mechanism and makes it incredibly hard for adversaries to spawn lots of identities and manipulate the network with them. This PoW process repeats after every 100 Tx blocks, which corresponds to roughly 1.5 hours. In between, no PoW needs to be done, meaning Zilliqa’s energy consumption to keep the network secure is low. For more detailed information on how mining works click here.
Okay, hats off to you. You have made it this far. Before we go any deeper down the rabbit hole, we first need to understand why Zilliqa goes through all of the above technicalities and get a bit better grasp of what a blockchain is on a more fundamental level. Because the core of Zilliqa’s consensus protocol relies on pBFT (practical Byzantine Fault Tolerance), we need to know more about state machines and their function. Navigate to Viewblock, a Zilliqa block explorer, and then come back to this article. We will use this site to navigate through a few concepts.
 
We have established that Zilliqa is a public and distributed blockchain. Meaning that everyone with an internet connection can send ZILs, trigger smart contracts, etc. and there is no central authority who fully controls the network. Zilliqa and other public and distributed blockchains (like Bitcoin and Ethereum) can also be defined as state machines.
 
Taking the liberty of paraphrasing examples and definitions from Samuel Brooks’ Medium article: he describes a blockchain (like Zilliqa) as “A peer-to-peer, append-only datastore that uses consensus to synchronize cryptographically-secure data”.
 
Next, he states that "blockchains are fundamentally systems for managing valid state transitions”. For some more context, I recommend reading the whole Medium article to get a better grasp of the definitions and understanding of state machines. Nevertheless, let’s try to simplify and compile it into a single paragraph. Take traffic lights as an example: all their states (red, amber, and green) are predefined, all possible outcomes are known and it doesn’t matter if you encounter the traffic light today or tomorrow; it will still behave the same. Managing the states of a traffic light can be done by triggering a sensor on the road or pushing a button, resulting in one traffic light’s state going from green to red (via amber) and another light’s from red to green.
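Here is that traffic light as a tiny Python state machine — nothing Zilliqa-specific, just the “predefined states, known transitions” idea in code:

```python
# Minimal state machine for the traffic-light example above:
# every state is predefined and every valid transition is known in advance.
TRANSITIONS = {
    ("green", "button_pressed"): "amber",
    ("amber", "timer_elapsed"): "red",
    ("red",   "timer_elapsed"): "green",
}

def next_state(state: str, event: str) -> str:
    """Return the new state, or stay in the current state if the event is not valid here."""
    return TRANSITIONS.get((state, event), state)

state = "green"
for event in ["button_pressed", "timer_elapsed", "timer_elapsed"]:
    state = next_state(state, event)
    print(event, "->", state)   # green -> amber -> red -> green
```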
 
With public blockchains like Zilliqa, this isn’t so straightforward and simple. It started with block #1 almost 1.5 years ago, and every 45 seconds or so a new block linked to the previous one is added, resulting in a chain of blocks with transactions in them that everyone can verify, from block #1 to the current #647,000+ block. The state is ever changing and the states it can find itself in are infinite. And while a traffic light might work in tandem with various other traffic lights, it’s rather insignificant compared to a public blockchain. Zilliqa consists of 2400 nodes that need to work together to reach consensus on what the latest valid state is, while some of these nodes may have latency or broadcast issues, drop offline, or deliberately try to attack the network, etc.
 
Now go back to the Viewblock page, take a look at the number of transactions, addresses, block and DS height, and then hit refresh. As expected, you see new, incremented values for one or all of these parameters. And how did the Zilliqa blockchain manage to transition from the previous valid state to the latest valid state? By using pBFT to reach consensus on the latest valid state.
 
After having obtained the entry ticket, miners execute pBFT to reach consensus on the ever-changing state of the blockchain. pBFT requires a series of network communications between nodes, so no GPU is involved (only CPU), resulting in low total energy consumption to keep the blockchain secure, decentralized and scalable.
 
pBFT stands for practical Byzantine Fault Tolerance and is an optimization on the Byzantine Fault Tolerant algorithm. To quote Blockonomi: “In the context of distributed systems, Byzantine Fault Tolerance is the ability of a distributed computer network to function as desired and correctly reach a sufficient consensus despite malicious components (nodes) of the system failing or propagating incorrect information to other peers.” Zilliqa is such a distributed computer network and depends on the honesty of the nodes (shard and DS) to reach consensus and to continuously update the state with the latest block. If pBFT is a new term for you I can highly recommend the Blockonomi article.
 
The idea of pBFT was introduced in 1999 - one of the authors even won a Turing award for it - and it is well researched and applied in various blockchains and distributed systems nowadays. If you want more advanced information than the Blockonomi link provides click here. And if you’re in between Blockonomi and the University of Singapore read the Zilliqa Design Story Part 2 dating from October 2017.
Quoting from the Zilliqa tech whitepaper: “pBFT relies upon a correct leader (which is randomly selected) to begin each phase and proceed when the sufficient majority exists. In case the leader is byzantine it can stall the entire consensus protocol. To address this challenge, pBFT offers a view change protocol to replace the byzantine leader with another one.”
 
pBFT can tolerate up to ⅓ of the nodes being dishonest (offline counts as Byzantine, i.e. dishonest) and the consensus protocol will keep functioning without stalling or hiccups. Once more than ⅓ but no more than ⅔ of the nodes are dishonest, the network stalls and a view change is triggered to elect a new DS leader. Only when more than ⅔ of the nodes are dishonest (66%+) do double-spend attacks become possible.
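These ⅓ and ⅔ thresholds follow from the standard pBFT sizing rule n ≥ 3f + 1. A small Python sketch of that arithmetic, applied to the committee sizes mentioned in this article (600 per shard, 2400 in total), looks like this:

```python
# Standard pBFT fault-tolerance arithmetic: a committee of n nodes tolerates at
# most f Byzantine (dishonest or offline) nodes, where n >= 3f + 1.
def max_faulty(n: int) -> int:
    return (n - 1) // 3

for committee in (600, 2400):   # shard size and total network size mentioned above
    f = max_faulty(committee)
    print(f"{committee} nodes: consensus keeps running with up to {f} faulty "
          f"({f / committee:.1%}); beyond that the network stalls, and past 2/3 "
          f"dishonest, safety against double spends is lost.")
```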
 
If the network stalls no transactions can be processed and one has to wait until a new honest leader has been elected. When the mainnet was just launched and in its early phases, view changes happened regularly. As of today the last stalling of the network - and view change being triggered - was at the end of October 2019.
 
Another benefit of using pBFT for consensus, besides low energy use, is the immediate finality it provides. Once your transaction is included in a block and the block is added to the chain, it’s done. Lastly, take a look at this article where three types of finality are defined: probabilistic, absolute and economic finality. Zilliqa falls under absolute finality (just like Tendermint, for example). Although this is lengthy already, we only skipped through some of the inner workings of Zilliqa’s consensus: read the Zilliqa Design Story Part 3 and you will be close to having a complete picture of it. Enough about PoW, the Sybil resistance mechanism, pBFT, etc. Another thing we haven’t looked at yet is the amount of decentralization.
 
Decentralisation
 
Currently, there are four shards, each of them consisting of 600 nodes: 1 shard with 600 so-called DS nodes (Directory Service nodes, which need to achieve a higher difficulty than shard nodes) and 1800 shard nodes, of which 250 are shard guards (centralized nodes controlled by the team). The number of shard guards has been steadily declining, from 1200 in January 2019 to 250 as of May 2020. On the Viewblock statistics you can see that many of the nodes are located in the US, but those are only the (CPU parts of the) shard nodes that perform pBFT. There is no data on where the PoW hashpower comes from. And when the Zilliqa blockchain starts reaching its transaction capacity limit, a network upgrade needs to be executed to lift the current cap of 2400 nodes, allowing more nodes and the formation of more shards, which will let the network keep scaling according to demand.
Besides shard nodes there are also seed nodes. The main role of seed nodes is to serve as direct access points (for end-users and clients) to the core Zilliqa network that validates transactions. Seed nodes consolidate transaction requests and forward these to the lookup nodes (another type of nodes) for distribution to the shards in the network. Seed nodes also maintain the entire transaction history and the global state of the blockchain which is needed to provide services such as block explorers. Seed nodes in the Zilliqa network are comparable to Infura on Ethereum.
 
The seed nodes were at first only operated by Zilliqa themselves, exchanges and Viewblock. Operators of seed nodes, like exchanges, had no incentive to open them up to the greater public, so they were centralised at first. Decentralisation at the seed node level has been steadily rolled out since March 2020 (Zilliqa Improvement Proposal 3). Currently the number of seed nodes is being increased, they are public-facing, and at the same time PoS is applied to incentivize seed node operators and make it possible for ZIL holders to stake and earn passive yields. Important distinction: seed nodes are not involved with consensus! That is still PoW as the entry ticket and pBFT for the actual consensus.
 
5% of the block rewards have been assigned to seed nodes (from the beginning in 2019) and those are used to pay out ZIL stakers. The 5% block rewards, with an annual yield of 10.03%, translate to roughly 610 MM ZIL in total that can be staked. Exchanges use the custodial variant of staking and wallets like Moonlet will use the non-custodial version (starting in Q3 2020). Staking is done by sending ZIL to a smart contract created by Zilliqa and audited by Quantstamp.
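Using only the figures quoted above (10.03% annual yield, ~610 MM ZIL stakeable), the implied yearly payout pool and an individual staker’s return can be sketched as follows — rough arithmetic, not an official calculator:

```python
# Rough staking arithmetic using only the figures quoted in the text above.
annual_yield = 0.1003                  # 10.03% per year, as stated
total_stakeable_zil = 610_000_000      # ~610 MM ZIL, as stated

# Implied ZIL paid out to stakers per year if the full amount is staked:
implied_annual_pool = total_stakeable_zil * annual_yield
print(f"Implied annual payout pool: ~{implied_annual_pool:,.0f} ZIL")

# Expected yearly reward for a hypothetical individual staker:
my_stake = 100_000
print(f"Staking {my_stake:,} ZIL earns ~{my_stake * annual_yield:,.0f} ZIL per year")
```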
 
With a high number of DS and shard nodes, and seed nodes becoming more decentralized too, Zilliqa qualifies for the label of decentralized in my opinion.
 
Smart contracts
 
Let me start by saying I’m not a developer and my programming skills are quite limited, so I‘m taking the ELI5 route (maybe ELI12). If you are familiar with JavaScript, Solidity or specifically OCaml, please head straight to Scilla - read the docs to get a good initial grasp of how Zilliqa’s smart contract language Scilla works, and if you ask yourself “why another programming language?” check this article. And if you want to play around with some sample contracts in an IDE, click here. The faucet can be found here. And more information on architecture, dapp development and the API can be found on the Developer Portal.
If you are more into listening and watching: check this recent webinar explaining Zilliqa and Scilla. Link is time-stamped so you’ll start right away with a platform introduction, roadmap 2020 and afterwards a proper Scilla introduction.
 
Generalized: programming languages can be divided into being ‘object-oriented’ or ‘functional’. Here is an ELI5 given by a software development academy: “all programs have two basic components, data – what the program knows – and behavior – what the program can do with that data. So object-oriented programming states that combining data and related behaviors in one place, is called “object”, which makes it easier to understand how a particular program works. On the other hand, functional programming argues that data and behavior are different things and should be separated to ensure their clarity.”
 
Scilla is on the functional side and shares similarities with OCaml: OCaml is a general-purpose programming language with an emphasis on expressiveness and safety. It has an advanced type system that helps catch your mistakes without getting in your way. It's used in environments where a single mistake can cost millions and speed matters, is supported by an active community, and has a rich set of libraries and development tools. For all its power, OCaml is also pretty simple, which is one reason it's often used as a teaching language.
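To make the object-oriented vs. functional distinction a bit more concrete, here is a toy contrast in Python — used purely for illustration; Scilla itself looks much closer to OCaml and is far stricter than this:

```python
# Object-oriented style: data (balance) and behaviour (deposit) live together,
# and the object's internal state is mutated in place.
class Account:
    def __init__(self, balance: int):
        self.balance = balance

    def deposit(self, amount: int) -> None:
        self.balance += amount          # mutates internal state

# Functional style: data and behaviour are separate and nothing is mutated;
# a new value is returned instead. Scilla leans towards this way of thinking.
def deposit(balance: int, amount: int) -> int:
    return balance + amount

acct = Account(100)
acct.deposit(50)
print(acct.balance)          # 150 (object mutated in place)
print(deposit(100, 50))      # 150 (new value, original untouched)
```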
 
Scilla is blockchain agnostic and can be implemented on other blockchains as well; it is recognized by academics and won a Distinguished Artifact Award at the end of last year.
 
One of the reasons why the Zilliqa team decided to create their own programming language focused on preventing smart contract vulnerabilities is that adding logic to a blockchain, i.e. programming, means that you cannot afford to make mistakes; otherwise, it could cost you. It’s all great and fun that blockchains are immutable, but updating your code because you found a bug isn’t the same as with a regular web application, for example. And smart contracts inherently involve cryptocurrencies in some form, and thus value.
 
Another difference with programming languages on a blockchain is gas. Every transaction you do on a smart contract platform like Zilliqa or Ethereum costs gas. With gas you basically pay for computational costs. Sending a ZIL from address A to address B costs 0.001 ZIL currently. Smart contracts are more complex, often involve various functions and require more gas (if gas is a new concept click here ).
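To make the fee arithmetic concrete, here is a rough sketch of how a total fee follows from gas used and gas price. The split into gas units and gas price below is a made-up placeholder, chosen only so that a plain transfer works out to the 0.001 ZIL quoted above — it is not Zilliqa’s actual fee schedule:

```python
# Illustrative fee arithmetic only: fee = gas_used * gas_price.
# The gas amounts and gas price below are hypothetical placeholders, picked so
# that a plain transfer matches the 0.001 ZIL quoted in the text above.
def tx_fee_zil(gas_used: int, gas_price_zil: float) -> float:
    return gas_used * gas_price_zil

simple_transfer_gas = 50        # hypothetical gas units for a plain A -> B transfer
contract_call_gas = 5_000       # hypothetical gas units for a smart contract call
gas_price = 0.00002             # hypothetical ZIL per gas unit

print(f"Plain transfer: {tx_fee_zil(simple_transfer_gas, gas_price):.4f} ZIL")  # ~0.0010
print(f"Contract call:  {tx_fee_zil(contract_call_gas, gas_price):.4f} ZIL")    # ~0.1000
```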
 
So with Scilla, similar to Solidity, you need to make sure that “every function in your smart contract will run as expected without hitting gas limits. An improper resource analysis may lead to situations where funds may get stuck simply because a part of the smart contract code cannot be executed due to gas limits. Such constraints are not present in traditional software systems”. Scilla design story part 1
 
Some examples of smart contract issues you’d want to avoid are: leaking funds, ‘unexpected changes to critical state variables’ (example: someone other than you setting his or her address as the owner of the smart contract after creation) or simply killing a contract.
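As a minimal sketch — in Python, not Scilla, and with hypothetical names — of the kind of owner-only guard that prevents the “someone else sets themselves as owner” issue mentioned above:

```python
# Toy sketch (not Scilla) of an owner-only guard around a critical state variable.
class ToyContract:
    def __init__(self, creator: str):
        self.owner = creator                      # set once at deployment

    def set_owner(self, caller: str, new_owner: str) -> None:
        # Without this check, anyone could take over the contract after creation.
        if caller != self.owner:
            raise PermissionError("only the current owner may transfer ownership")
        self.owner = new_owner

c = ToyContract(creator="zil1alice")
c.set_owner(caller="zil1alice", new_owner="zil1bob")                 # allowed
try:
    c.set_owner(caller="zil1mallory", new_owner="zil1mallory")       # rejected
except PermissionError as e:
    print(e)
```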
 
Scilla also allows for formal verification. Wikipedia to the rescue: In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
 
Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code.
 
“Scilla is being developed hand-in-hand with formalization of its semantics and its embedding into the Coq proof assistant — a state-of-the-art tool for mechanized proofs about properties of programs.”
 
Simply put, with Scilla and its accompanying tooling, developers can be mathematically sure and can prove that the smart contract they’ve written does what they intend it to do.
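Formal verification proves a property for all possible inputs rather than testing a handful. As a very loose analogy only, here is a “no funds created or destroyed” property stated in Python and checked over a small input space — real verification with Scilla’s Coq embedding goes far beyond this kind of enumeration:

```python
# Loose analogy only: checking a stated property over a small finite input space.
# Formal verification (e.g. Scilla's Coq embedding) proves the property for *all*
# inputs mathematically, not by enumeration.
def transfer(balances: dict, sender: str, receiver: str, amount: int) -> dict:
    if amount <= 0 or balances.get(sender, 0) < amount:
        return balances                           # reject invalid transfers
    new = dict(balances)
    new[sender] -= amount
    new[receiver] = new.get(receiver, 0) + amount
    return new

# Property: transfers never create or destroy funds (no "leaking funds").
for a in range(0, 5):
    for b in range(0, 5):
        for amt in range(-2, 7):
            before = {"A": a, "B": b}
            after = transfer(before, "A", "B", amt)
            assert sum(after.values()) == sum(before.values())
print("conservation-of-funds property holds on all checked inputs")
```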
 
Smart contracts in a sharded environment and state sharding
 
There is one more topic I’d like to touch on: smart contract execution in a sharded environment (and what is the effect of state sharding). This is a complex topic. I’m not able to explain it any easier than what is posted here. But I will try to compress the post into something easy to digest.
 
Earlier on we established that Zilliqa can process transactions in parallel due to network sharding. This is where the linear scalability comes from. We can define three categories of transactions: a transaction from address A to B (Category 1), a transaction where a user interacts with one smart contract (Category 2), and the most complex ones, where triggering a transaction results in multiple smart contracts being involved (Category 3). The shards are able to process transactions on their own without interference from the other shards. With Category 1 transactions that is doable, with Category 2 transactions it sometimes works — when the address is in the same shard as the smart contract — but with Category 3 you definitely need communication between the shards. Solving that requires defining a set of communication rules the protocol needs to follow in order to process all transactions in a generalised fashion.
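As a toy illustration of the three categories (the fields used here are purely hypothetical; the real protocol derives this from addresses, shard membership and the contract call graph):

```python
# Toy classification of the three transaction categories described above.
# Purely illustrative; the fields are hypothetical, not Zilliqa's actual format.
def categorize(tx: dict) -> int:
    if not tx.get("calls_contract"):
        return 1                    # plain A -> B transfer
    if len(tx.get("contracts_involved", [])) <= 1:
        return 2                    # user interacts with a single smart contract
    return 3                        # chained calls across multiple smart contracts

print(categorize({"calls_contract": False}))                                   # 1
print(categorize({"calls_contract": True, "contracts_involved": ["dex"]}))     # 2
print(categorize({"calls_contract": True,
                  "contracts_involved": ["dex", "token", "oracle"]}))          # 3
```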
 
And this is where the downside of state sharding comes in, and why Zilliqa doesn’t use it currently: all shards in Zilliqa have access to the complete state. Yes, the state size (0.1 GB at the moment) grows and all of the nodes need to store it, but it also means that they don’t need to shop around for information available on other shards, which would require more communication and add more complexity. Links requiring computer science and/or developer knowledge if you want to dig further: Scilla - language grammar; Scilla - Foundations for Verifiable Decentralised Computations on a Blockchain; Gas Accounting; NUS x Zilliqa: Smart contract language workshop.
 
Easier-to-follow links on programming Scilla: https://learnscilla.com/home and Ivan on Tech.
 
Roadmap / Zilliqa 2.0
 
There is no strictly defined roadmap, but here are the topics being worked on. And via the Zilliqa website there is also more information on the projects they are working on.
 
Business & Partnerships
 
It’s not only in technology that Zilliqa seems to be excelling: their ecosystem has been expanding and is starting to grow rapidly. The project is on a mission to provide OpenFinance (OpFi) to the world, and Singapore is the right place to be due to its progressive regulations and futuristic thinking. Singapore has taken a proactive approach towards cryptocurrencies by introducing the Payment Services Act 2019 (PS Act). Among other things, the PS Act will regulate intermediaries dealing with certain cryptocurrencies, with a particular focus on consumer protection and anti-money laundering. It will also provide a stable regulatory licensing and operating framework for cryptocurrency entities, effectively covering all crypto businesses and exchanges based in Singapore. According to PwC, 82% of the surveyed executives in Singapore reported blockchain initiatives underway and 13% of them have already brought those initiatives live to the market. There is also an increasing list of organizations that are starting to provide digital payment services. Moreover, Singaporean blockchain developer Building Cities Beyond has recently created a $15 million innovation grant to encourage development on its ecosystem. This all suggests that Singapore is trying to position itself as (one of) the leading blockchain hubs in the world.
 
Zilliqa seems to already take advantage of this and recently helped launch Hg Exchange on their platform, together with financial institutions PhillipCapital, PrimePartners and Fundnel. Hg Exchange, which is now approved by the Monetary Authority of Singapore (MAS), uses smart contracts to represent digital assets. Through Hg Exchange financial institutions worldwide can use Zilliqa's safe-by-design smart contracts to enable the trading of private equities. For example, think of companies such as Grab, Airbnb, SpaceX that are not available for public trading right now. Hg Exchange will allow investors to buy shares of private companies & unicorns and capture their value before an IPO. Anquan, the main company behind Zilliqa, has also recently announced that they became a partner and shareholder in TEN31 Bank, which is a fully regulated bank allowing for tokenization of assets and is aiming to bridge the gap between conventional banking and the blockchain world. If STOs, the tokenization of assets, and equity trading will continue to increase, then Zilliqa’s public blockchain would be the ideal candidate due to its strategic positioning, partnerships, regulatory compliance and the technology that is being built on top of it.
 
What is also very encouraging is their focus on banking the un(der)banked. They are launching a stablecoin basket starting with XSGD. As many of you know, stablecoins are currently mostly used for trading. However, Zilliqa is actively trying to broaden the use case of stablecoins. I recommend everybody to read this text that Amrit Kumar wrote (one of the co-founders). These stablecoins will be integrated in the traditional markets and bridge the gap between the crypto world and the traditional world. This could potentially revolutionize and legitimise the crypto space if retailers and companies will for example start to use stablecoins for payments or remittances, instead of it solely being used for trading.
 
Zilliqa also released their DeFi strategic roadmap (dating from November 2019) which seems to align well with their OpFi strategy. A non-custodial DEX is coming to Zilliqa, made by Switcheo, which allows cross-chain trading (atomic swaps) between ETH, EOS and ZIL based tokens. They also signed a Memorandum of Understanding for a (soon to be announced) USD stablecoin. And as Zilliqa is all about regulations and being compliant, I’m speculating that it will be a regulated USD stablecoin. Furthermore, XSGD is already created and visible on the block explorer, and XIDR (an Indonesian stablecoin) is also coming soon via StraitsX. Here is also an overview of the Tech Stack for Financial Applications from September 2019. Further quoting Amrit Kumar on this:
 
“There are two basic building blocks in DeFi/OpFi though: 1) stablecoins as you need a non-volatile currency to get access to this market and 2) a dex to be able to trade all these financial assets. The rest are built on top of these blocks.
 
So far, together with our partners and community, we have worked on developing these building blocks with XSGD as a stablecoin. We are working on bringing a USD-backed stablecoin as well. We will soon have a decentralised exchange developed by Switcheo. And with HGX going live, we are also venturing into the tokenization space. More to come in the future.”
 
Additionally, they also have the ZILHive initiative that injects capital into projects. There have already been 6 waves of various teams working on infrastructure, innovation and research, and they are not only from ASEAN or Singapore but global: see the Grantees breakdown by country. Over 60 project teams from over 20 countries have contributed to Zilliqa's ecosystem. This includes individuals and teams developing wallets, explorers, developer toolkits, smart contract testing frameworks, dapps, etc. As some of you may know, Unstoppable Domains (UD) blew up when they launched on Zilliqa. UD aims to replace cryptocurrency addresses with a human-readable name and allows for uncensorable websites. Zilliqa will probably be the only one able to handle all these transactions on-chain due to its ability to scale and the resulting low fees, which is why the UD team launched this on Zilliqa in the first place. Furthermore, Zilliqa also has a strong emphasis on security, compliance, and privacy, which is why they partnered with companies like Elliptic, ChainSecurity (part of PwC Switzerland), and Incognito. Their sister company Aqilliz (Zilliqa spelled backwards) focuses on revolutionizing the digital advertising space and is doing interesting things like using Zilliqa to track outdoor digital ads with companies like Foodpanda.
 
Zilliqa is listed on nearly all major exchanges, has several different fiat gateways, and has recently been added to Binance’s margin trading and futures trading with really good volume. They also have a very impressive team with good credentials and experience. They don't just have “tech people”; they have a mix of tech people, business people, marketeers, scientists, and more. Naturally, it's good to have a mix of people with different skill sets if you work in the crypto space.
 
Marketing & Community
 
Zilliqa has a very strong community. If you follow their Twitter, you’ll notice their engagement is much higher than you’d expect for a coin with approximately 80k followers. They have also been ‘coin of the day’ on LunarCrush many times. LunarCrush tracks real-time cryptocurrency value and social data. According to their data, it seems Zilliqa has a more fundamental and deeper understanding of marketing and community engagement than almost all other coins. While almost all coins have been a bit frozen in the last months, Zilliqa seems to be on its own bull run. It was ranked somewhere in the 100s a few months ago and is currently ranked #46 on CoinGecko. Their official Telegram also has over 20k people and is very active, and their community channel, which is over 7k now, is more active and larger than many other official channels. Their local communities also seem to be growing.
 
Moreover, their community started ‘Zillacracy’ together with the Zilliqa core team (see www.zillacracy.com). It’s a community-run initiative where people from all over the world are now helping with marketing and development on Zilliqa. Since its launch in February 2020 they have been doing a lot and will also run their own non-custodial seed node for staking. This seed node will also allow them to start generating revenue, letting them become a self-sustaining entity that could potentially scale up to a decentralized company working in parallel with the Zilliqa core team. Comparing it to all the other smart contract platforms (e.g. Cardano, EOS, Tezos), they don't seem to have started a similar initiative (correct me if I’m wrong though). This suggests, in my opinion, that these other smart contract platforms do not fully understand how to utilize the ‘power of the community’. This is something you cannot ‘buy with money’ and it gives many projects in the space a disadvantage.
 
Zilliqa also released two social products called SocialPay and Zeeves. SocialPay allows users to earn ZILs while tweeting with a specific hashtag. They have recently used it in partnership with the Singapore Red Cross for a marketing campaign after their initial pilot program. It seems like a very valuable social product with a good use case. I can see a lot of traditional companies entering the space through this product, which they seem to suggest will happen. Tokenizing hashtags with smart contracts to get network effect is a very smart and innovative idea.
 
Regarding Zeeves: this is a tipping bot for Telegram. They already have thousands of signups and they plan to keep upgrading it so more and more people use it (e.g. they recently added a quiz feature). They also use it during AMAs to reward people in real time. It’s a very smart approach to grow their communities and get people familiar with ZIL. I can see this becoming very big on Telegram. This tool suggests, again, that the Zilliqa team has a deeper understanding of what the crypto space and community need and is good at finding the right innovative tools to grow and scale.
 
To be honest, I haven’t covered everything (I’m also reaching the character limit, haha). So many updates have been happening lately that it's hard to keep up, such as the International Monetary Fund mentioning Zilliqa in their report, custodial and non-custodial staking, Binance margin, futures and widget, entering the Indian market, and more. The Head of Marketing, Colin Miles, has also released this as an overview of what is coming next. And last but not least, Vitalik Buterin has been mentioning Zilliqa lately, acknowledging Zilliqa and noting that both projects have a lot of room to grow. There is much more info of course and a good part of it has been served to you on a silver platter. I invite you to continue researching by yourself :-) And if you have any comments or questions please post here!
submitted by haveyouheardaboutit to CryptoCurrency [link] [comments]
