What is up with PhpStorm? Are they quietly BitCoin mining in the background or something along those lines...
My CPU usage can range anywhere between 150% and 500%... and if I'm using a laptop, it gets really warm and toasts my nads... which is unpleasant and requires constant re-arranging.
I'm seriously thinking of changing IDEs if this continues, for the sake of my future children...
Any help would be appreciated!
And for those of you who don't believe this hits 600%+...
Sometimes I have performance problems or strange bugs that are actually tied to a specific project.
And... there's a magic button in JetBrains IDEs to fix that.
As soon as I encounter a problem like that, my first reflex is to go to File > Invalidate Caches / Restart... and click Invalidate and Restart.
This will clean up the index and other caches and rebuild them, which often fixes a variety of problems. Fortunately, that's still quite rarely required.
But in any case, don't blame the IDE as a whole: JetBrains IDEs are actually not especially slow (sometimes the Swing UI can create that impression, but in fact they are fairly good in general) and will never consume that much CPU under normal conditions. As @LazyOne said, the problem can also come from a third-party plugin or something else entirely. If the problem persists after a cache invalidation, follow their advice.
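For the curious, that action essentially throws away the IDE's cache and index directories so they are rebuilt on the next start. A rough sketch of doing the equivalent by hand on Linux (the path is an assumption based on the default cache location of recent JetBrains versions; other OSes use different locations, and the menu action remains the supported way):

    import shutil
    from pathlib import Path

    # Assumed default cache root for recent JetBrains IDEs on Linux;
    # verify the actual path on your machine before deleting anything,
    # and make sure the IDE is closed first.
    CACHE_ROOT = Path.home() / ".cache" / "JetBrains"

    def wipe_caches(product_prefix: str = "PhpStorm") -> None:
        """Delete the cache/index directories for every installed PhpStorm version."""
        for cache_dir in CACHE_ROOT.glob(f"{product_prefix}*"):
            print(f"Removing {cache_dir}")
            shutil.rmtree(cache_dir, ignore_errors=True)  # rebuilt on next IDE start

    wipe_caches()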
After much bug testing I resolved this by nuking and re-installing my project. I'm still not sure what the cause of the problem was; however, the re-install did the trick!
They now also have a magic File > Repair IDE... action.
It worked for me even when cache invalidation didn't help.
So I'm trying to improve my skills in reverse engineering; I'm still somewhat of a newbie.
Anyway, I found a crackme that's packed and has several anti-debugging checks (on Windows).
If I attach my debugger to the process (after it has unpacked) and put a breakpoint in interesting places, the exe seems to crash when I hit that breakpoint. I'm almost certain the program doesn't actually check for breakpoints, because even when I overwrote the return instruction with NOPs, so that execution would hit the INT 3 instructions that are already (conveniently) there, it still crashed. Maybe it does check, but even so, that doesn't seem to be the real problem.
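For context only, and not claiming this is what the crackme actually does: a software breakpoint is just an INT 3 byte (0xCC) that the debugger writes over the original instruction, so a naive anti-debug check can notice it either by scanning for 0xCC or by checksumming its own code, and the latter would also catch hand-patched NOPs. A minimal Python sketch of that idea, with made-up byte sequences:

    from zlib import crc32

    INT3 = 0xCC  # byte a debugger writes to set a software breakpoint
    NOP = 0x90   # byte commonly used to patch instructions out

    def looks_tampered(code: bytes, expected_crc: int) -> bool:
        """Naive integrity/anti-breakpoint check over a code region."""
        if INT3 in code:                     # explicit scan for software breakpoints
            return True
        return crc32(code) != expected_crc   # also catches NOP (or any other) patches

    # Hypothetical 5-byte function: push ebp / mov ebp,esp / pop ebp / ret
    original = bytes([0x55, 0x8B, 0xEC, 0x5D, 0xC3])
    patched = bytes([0xCC, 0x8B, 0xEC, 0x5D, 0xC3])  # a breakpoint set on the first byte

    baseline = crc32(original)
    print(looks_tampered(original, baseline))  # False
    print(looks_tampered(patched, baseline))   # True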
It's worth noting that the program doesn't crash at every breakpoint, just at the interesting places I actually need to debug.
I would appreciate some guidance on how to go about dealing with this.
Thanks!
Will having a large number of unused stylesheet rules/classes have a significant effect on performance (load time, rendering time)?
Well, since the browser has to download and parse the entire file, it'll have an impact. How big an impact depends on how big the file is, how fast the computer the browser is running on is, and how fast the user's internet connection is.
I've just tried a .css file of 10,000 lines full of redundant information, with the relevant styles spread all over it. None of my browsers actually seemed to give a damn, and I didn't notice any visible slowdown (bear in mind the file was on localhost). It still doesn't make it a particularly good idea, though.
If you care about performance you should remove those styles. As others said, the browser still needs to download the file; that's the first problem. After that comes parsing, which may also be improved. I'd suggest using tools like CSSLint or PageSpeed. And yes, browsers nowadays do a great job; parsing is incredibly fast. But even if an operation takes only a few microseconds, it's still better to save that time. Also, you will work better and faster with less code.
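To make that concrete, here is a rough and deliberately simplistic sketch of auditing for unused class selectors by comparing a stylesheet against a set of HTML files. The file paths are hypothetical, and real tools like the ones mentioned above handle far more cases (complex selectors, classes added from JavaScript, templates):

    import re
    from pathlib import Path

    def classes_in_css(css_text: str) -> set:
        """Very rough extraction of .class selectors from a stylesheet."""
        return set(re.findall(r"\.([A-Za-z_-][\w-]*)", css_text))

    def classes_in_html(html_text: str) -> set:
        """Collect everything that appears inside class="..." attributes."""
        used = set()
        for attr in re.findall(r'class="([^"]*)"', html_text):
            used.update(attr.split())
        return used

    # Hypothetical layout: one stylesheet and a folder of static pages.
    css_classes = classes_in_css(Path("styles/main.css").read_text())
    used_classes = set()
    for page in Path("pages").glob("**/*.html"):
        used_classes |= classes_in_html(page.read_text())

    unused = css_classes - used_classes
    print(f"{len(unused)} potentially unused classes:", sorted(unused)[:20])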
We're developing a web site. One of the development tools we're using has an alpha release available of its next version, which includes a number of features we really want to use (i.e. they'd save us from having to implement thousands of lines of code to do pretty much exactly the same thing anyway).
I've done some initial evaluations on it and I like what I see. The question is, should we start actually using it for real? That is, beyond just evaluating it, actually using it for our development and relying on it?
As alpha software, it obviously isn't ready for release yet... but then nor is our own code. It is open source, and we have the skills needed to debug it, so we could in theory actually contribute bug fixes back.
But on the other hand, we don't know what the release schedule for it is (they haven't published one yet), and while I feel okay developing with it, I wouldn't be so sure about using it in production so if it isn't ready before we are then it may delay our own launch.
What do you think? Is it worth taking the risk? Do you have any experiences (good or bad) of similar situations?
[EDIT]
I've deliberately not specified the language we're using or the dev-tool in question in order to keep the scope of the question broad, as I feel it's a question that can apply to pretty much any dev environment.
[EDIT2]
Thank you to Marjan for the very helpful reply. I was hoping for more responses though, so I'm putting a bounty on this.
I've had experience contributing to an open source project once, as you say you hope to do. They ignored the patch for a year (they have customers to attend to, of course; although they don't sell the software, they sell the support). After a year, they rejected the patch with no alternative solution to the problem, and without a sound justification for doing so. It was just out of their scope at that time, I guess.
In your situation I would try to solve one or two of their not-so-high-priority, already-reported bugs and see how responsive they are, and then decide. Because your ability to hit deadlines will be tied to theirs. And if you have to maintain your own copy of their artifacts, that's guaranteed pain.
In short: evaluate not only the product, but also the producers.
Regards.
My personal take on this: don't. If they don't come through for you on your time scale, you're stuck and will still have to put in the thousands of lines yourself, probably under heavy time pressure.
Having said that, there is one way I see you could try and have your cake and eat it too.
If you see a way to abstract it out, that is, to insulate your own code from the library's, for example using adapter or facade patterns, then go ahead and use the alpha for development. But determine beforehand the latest date, according to your release schedule, by which you would have to start developing your own thousands-of-lines version behind the adapter/facade. If the alpha hasn't turned into an RC by then: grin and bear it and develop your own.
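As a sketch of that adapter/facade idea (all names here are hypothetical, not the actual tool in question): your application depends only on a small interface you own, with one implementation delegating to the alpha library and a fallback you can fill in later if the alpha doesn't mature in time.

    from abc import ABC, abstractmethod

    class ReportRenderer(ABC):
        """The narrow interface your own code depends on."""

        @abstractmethod
        def render(self, data: dict) -> str: ...

    class AlphaLibRenderer(ReportRenderer):
        """Adapter over the hypothetical alpha library; the only place that imports it."""

        def render(self, data: dict) -> str:
            import alpha_lib  # hypothetical third-party alpha dependency
            return alpha_lib.render_report(data)

    class HomegrownRenderer(ReportRenderer):
        """Fallback you write yourself if the alpha never reaches RC."""

        def render(self, data: dict) -> str:
            raise NotImplementedError("start here if the alpha slips past your cut-off date")

    def build_renderer(use_alpha: bool) -> ReportRenderer:
        # The rest of the code base only ever sees ReportRenderer.
        return AlphaLibRenderer() if use_alpha else HomegrownRenderer()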
It depends.
For open-source environments it depends more on the quality of the release than on the label (alpha/beta/stable) it carries. I've worked with alpha code that was rock solid compared to alleged production code from other vendors.
If you've got the source, you can fix any bugs yourself, whereas with closed source (usually commercially supported) you could never release production code built on a beta product, because it's unsupported by the vendor who holds the code, and so you can't fix it.
So in your position I'd be assessing the quality of the alpha version and then deciding if that could go into production.
Of course, none of the above applies to anything even remotely safety-critical.
It is just a question of managing risks. In open source, an alpha release can mean a lot of different things. You need to be prepared to:
handle API changes;
provide bug fixes and workarounds;
test stability, performance and scalability yourself;
track changes much more closely, and decide whether to adopt them yet;
track the progress they are making and their responsiveness to patches/issues.
You do use continuous integration, don't you?
Yesterday, when I checked out the latest version of our internal tool, I saw 30+ new revisions. That got me curious, since I thought somebody had finally fixed those annoying bugs and added that feature I'd been waiting for for so long... and guess what? None of that happened; someone just thought it would be nice to update some headers and make a minor adjustment to two or three functions. Everything in a separate commit. Great.
This raised a discussion in our team: should this be considered OK, or should we prohibit such "abuse"? Arguably this really could fit into one or two commits, but 30 seems too many. How should this be handled, and what is the best practice?
You should be committing any time you make a change and are about to move on to the next one.
You shouldn't commit anything that stops the project from building.
You should be filling in the commit message so people know what changes have been made.
That'll do for me... I don't assume something has been done unless I see it in the commit message.
Generally I think a commit should relate to one logical task, e.g. fixing bug #103 or adding a new print function. This could be one file or several, that way you can see all changes made for a particular task. It is also easier to roll back the change if necessary.
If each file is checked in one by one, it is not easy to see the changes made for a particular update / task.
Also if multiple tasks are completed in one commit, it is not easy to see what changes belong to which task.
I wouldn't care about the number of commits as long as each commit keeps the project consistent (the build still succeeds). That's an internal count that shouldn't bother you. If you want to change something here, better to tell people to use structured commit messages (like "[bugfix] ...", "[feature] ...", "[minorfix] ..."); a minimal hook sketch for that follows below.
By the way, if you want to know whether bugs have been fixed or features have been added, using a bug tracking system is much better than checking commits in an SVN-like tool.
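For the structured commit messages suggested above, a minimal Git commit-msg hook could enforce the prefix (a sketch assuming Git; Subversion offers analogous server-side pre-commit hooks). Git calls the hook with the path to the proposed message file; save it as .git/hooks/commit-msg and make it executable:

    #!/usr/bin/env python3
    """Reject commit messages that don't start with an agreed prefix."""
    import sys

    ALLOWED_PREFIXES = ("[bugfix]", "[feature]", "[minorfix]")  # adapt to your team's tags

    def main() -> int:
        message = open(sys.argv[1], encoding="utf-8").read().strip()
        if message.startswith(ALLOWED_PREFIXES):
            return 0
        print("Commit message must start with one of:", ", ".join(ALLOWED_PREFIXES))
        return 1  # a non-zero exit aborts the commit

    if __name__ == "__main__":
        sys.exit(main())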
The battle against code entropy is an ongoing team effort. Minor check-ins where one just 'fixes broken windows' along one's way should be encouraged, not frowned upon. The source repository is the wrong tool for keeping track of bug fixes; that's what a bug tracker is for. So the inconvenience of locating fixes by scanning the code repository rather than the bug repository seems utterly negligible to me.
I work in a moderately sized team on a large code base (~1M LOC) with a huge history (~20 years). A lot of the code is a pile of mess: rotten branch logic, deprecated APIs, inconsistent naming conventions; even random indentation often makes it a misery to read. I started a habit of making minor "drive-by" readability improvements to try and fight complete code rot, and am trying hard to get teammates to adopt the same habit.
Unless your circumstances are radically different, I would try to look favorably on any such initiative. The alternative (which I'm familiar with all too well) is fearful stagnation, which dooms any code to rot.
What are the advantages and disadvantages of refactoring tools, in general?
Advantages
You are more likely to do the refactoring if a tool helps you.
A tool is more likely to get "rename" type refactorings right the first time than you are.
A tool lets you do refactoring on a codebase without unit tests that you could not risk doing by hand.
A tool can save you lots of time.
Both the leading tools (RefactorPro/CodeRush and ReSharper) will also highlight most coding errors without you having to compile.
Both the leading tools will highlight where you don't keep to their concept of best practices.
Disadvantages
Sometimes the tool will change the meaning of your code without you expecting it, due to bugs in the tool or the use of reflection etc. in your code base (see the sketch after this list).
A tool may make you feel safe with fewer unit tests…
A tool can be very slow…, so for renaming local vars etc. it can be quicker to do it by hand.
A tool will slow down the development system a lot, as it has to keep its database updated while you are editing code.
A tool takes time to learn.
A tool pushes you towards the refactorings it includes, and you may ignore the ones it doesn't, to your disadvantage.
A tool will have a large memory footprint for a large code base; however, memory is cheap these days.
No tool will cope well with very large solution files.
You will have to get your boss to agree to paying for the tool; this may take longer than the time the tool saves.
You may have to get your IT department to agree to you installing the tool.
You will be lost in your next job if they won't let you use the same tool :-)
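To illustrate the reflection caveat mentioned above: a rename refactoring rewrites every reference the tool can see statically, but a lookup by string, as in this hypothetical Python snippet, is invisible to it, so renaming the method silently breaks the call at runtime.

    class Exporter:
        def export_csv(self, rows):
            return "\n".join(",".join(map(str, r)) for r in rows)

    def run_export(exporter, format_name):
        # Reflection: the method is looked up by a string built at runtime, so a
        # rename refactoring of export_csv will not touch "export_" + format_name,
        # and this call starts failing with AttributeError after the rename.
        handler = getattr(exporter, "export_" + format_name)
        return handler([[1, 2], [3, 4]])

    print(run_export(Exporter(), "csv"))  # prints the rows as CSV lines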
Advantage: the obvious one: speed.
Disadvantages:
they push you towards the refactorings they include and you may ignore the ones they don't, to your disadvantage;
I've only tried one, with VS, and it slowed the app down noticeably. I couldn't decide whether it was worth it, but I had to rebuild the machine and haven't re-installed it since, so I guess that tells you something.
Code improvement suggestions (can be both an advantage and a disadvantage)
Removes code noise (advantage)
Renaming variables, methods (advantage)
I'd say that the speed of making code changes or writing code is the biggest advantage. I have CodeRush and I am lost without it.
I'd say the biggest disadvantage is the memory footprint; if you are tight on memory then it's probably going to hurt more than help. But I've got 4 GB and 8 GB on my dev boxes, so I don't really notice. (Not that the tools take huge amounts of memory, but if you are on 2 GB or less then it is going to be noticeable.)
Also, I've noticed that the two big refactoring tools for .NET (RefactorPro/CodeRush and ReSharper) both have problems with web site projects (a legacy inheritance, so out of my control) in their code analysis/suggestion engines. They seem to think everything is bad (actually, that's probably a fairly accurate assessment for a web site project, but I don't want to be reminded of it constantly).