Refactoring is the process of making code more readable and more amenable to change. It is obvious, therefore, that defactoring must be its opposite. That was my assignment for today.

Earlier this week my code failed review (what a surprise). The problem was a certain construct I had used to compensate for the fact that ECMAScript doesn't support access control through keywords like "private" or "public" the way other languages do; it achieves it differently. The reviewer's position is that access control is subversive, and that I should rewrite the code using .prototype, so that all members are public, and clarify through comments, documentation, and/or some naming convention that the "private" ones are only to be used by objects of the class (both flavours are sketched at the end of this post). I might add that this is not some subversive trick I made up entirely in my own head; it comes straight from our Bible on these matters.

Of my code, he "assumes it will be slower" than the proposed rewrite. That is not a completely unreasonable hypothesis, but it is based exclusively on assumption. It is almost a textbook example of premature optimization: this is not core functionality that will be called 1000 times a second, it is code that will run somewhere between 0 and about 20 times at startup, depending on the inputs. The only thing that prevents it from being true premature optimization is that premature optimization at least shaves a few nanoseconds off the execution time.

Quite sensibly (or so I thought), my nemesis said in his review comments that I should feel free to disagree and make my case, and the matter could be looked at again. I did exactly that, posting my response about half an hour after the initial review, even making sure that the words ... in the middle of the paper ... Now I have to muster the enthusiasm to submit it for review, knowing it will fail.

"We shouldn't do anything that we think will cost performance." (a) I don't think it will, and I was right. (b) When did that edict arrive? Most of our code appears to have been written with the express purpose of being as slow as possible. Changes like this, even if his hypotheses had turned out to be correct, would have a negligible effect thanks to the general stupidity of the architecture (a future post in its own right).

I have also been chastised for doing the profiling, because, again, when you have a lot to do, the best way to make progress is to pull random hypotheses from where the sun doesn't shine, implement them, and declare them a success. Collecting data so you can understand where to direct your efforts, or measure your success, is, it turns out, a huge waste of time.
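For the curious, here is a minimal sketch of the two flavours under discussion. The names are invented and this is not the reviewed code itself; it just illustrates a closure-based construct of the sort I used, next to the all-public .prototype style I have been told to defactor it into.

```javascript
// Flavour 1: closure-based "private" members.
// `secret` lives only in the constructor's scope, so nothing outside
// the object's privileged methods can read or write it.
function ClosureWidget(seed) {
    var secret = seed * 2;            // genuinely inaccessible from outside

    this.reveal = function () {       // privileged method, closes over `secret`
        return secret;
    };
}

// Flavour 2: the requested defactoring. Everything hangs off .prototype
// and is public; "_secret" is private only by naming convention,
// comments, and the goodwill of callers.
function ProtoWidget(seed) {
    this._secret = seed * 2;          // "private" -- please don't touch
}

ProtoWidget.prototype.reveal = function () {
    return this._secret;
};
```

The trade-off is exactly the one argued over above: the closure version actually enforces the privacy, at the cost of one function object per instance for each privileged method; the prototype version shares its methods across instances but relies on convention for the access control.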
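And since I have apparently sinned by measuring things, here is the kind of trivial timing run that would settle the "it will be slower" hypothesis. The iteration count is mine and the two constructors are the ones from the sketch above, not the real code; the code in question runs 0 to about 20 times at startup, so the per-construction difference is noise either way.

```javascript
// Construct each flavour a large number of times and report wall-clock time.
// 100,000 iterations is absurdly more than the 0-20 startup calls at issue.
function time(label, makeOne, iterations) {
    var start = Date.now();
    for (var i = 0; i < iterations; i += 1) {
        makeOne(i);
    }
    console.log(label + ": " + (Date.now() - start) + " ms for " +
                iterations + " constructions");
}

time("closure-based  ", function (i) { return new ClosureWidget(i); }, 100000);
time("prototype-based", function (i) { return new ProtoWidget(i); }, 100000);
```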