I'm using the Gist plugin for Sublime Text (2 in my case) from Condemil.
I've been using it to pull in commonly used code snippets which I store as Gists and it's been working perfectly until recently.
It still pulls in gists, but only the recent ones, and a lot of my older ones aren't being pulled in - they're completely ignored.
In the settings I have it pulling in everything, not just starred gists, so that's not the issue. I also know that it can only pull in 100 gists at a time (the GitHub API limit, apparently), but it's not even reaching that: I have barely more than 100 gists, and almost half of them aren't being pulled in.
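If it helps narrow things down, here's a rough sketch (assuming the public GitHub API, the Python requests library, and a placeholder token) of counting what the API itself returns page by page, to see whether this is an API problem or a plugin problem:

    # Rough sketch: count what the GitHub Gists API actually returns, page by page.
    # Assumes the `requests` library; TOKEN is a placeholder personal access token.
    import requests

    TOKEN = "your-token-here"  # placeholder
    headers = {"Authorization": "token %s" % TOKEN}

    gists = []
    page = 1
    while True:
        resp = requests.get(
            "https://api.github.com/gists",
            headers=headers,
            params={"per_page": 100, "page": page},  # 100 is the per-page maximum
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        gists.extend(batch)
        page += 1

    print("API returned %d gists" % len(gists))

If that count matches what you expect, the gists are all there and the plugin is stopping at the first page; if not, the API itself isn't returning them.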
Any thoughts?
Help is much appreciated.
I am using Grails 2.3.4 with mysql:mysql-connector-java:5.1.24, and the project has 163 GSP files. Every time I run the war script (or any other script that creates a WAR file) it shows the following error:
    Error |
    WAR packaging error: encoded string too long: 70621 bytes
No single GSP file is larger than 64 KB, and I have already commented out grails.project.fork in BuildConfig.groovy, but I am still getting this problem. Please help.
I doubt that this is the answer you want to see :) I can't imagine that you have a good reason for being anywhere near the max size of a GSP. You shouldn't even know what the number is, only that it's way higher than you would ever need it to be.
You've either got a ton of code or a ton of HTML (or both) in these gigantic pages. There are plenty of obvious strategies for putting your GSPs on a diet. Use taglibs to move a lot of the code (which should not be used at all in a GSP, this isn't PHP) out of the view rendering tier and into the controller and service tiers where it belongs. You can extract static and mostly-static HTML blocks to includes/templates.
There's probably a lot of duplicated work here too - it's difficult to get this many files this large without a significant amount of copypasta. As a file gets very large it gets very hard to maintain an overall sense of what's where - our brains can only handle a certain amount of data before overloading. You also tend to start misplacing small objects and partially eaten lunches in there, and that just makes things worse.
If you don't have time for the significant refactoring a project this far off track likely needs, even a quick, simple move to taglibs and templates without much thought about properly engineering the work would get things going. At least until you hit the limit again :)
I recently installed Emmet on sublime text 2 and since then I have been noticing a lot of slowness when working with large files.
One file I am working with has 1500 lines, and whenever I hit "tab" after typing an HTML tag shortcut, Sublime Text 2 becomes unresponsive for about 10-15 seconds...
When I work with smaller files, this is not an issue at all. When I uninstall Emmet/PyV8 performance on the larger files returns to normal.
I have searched here and other forums and haven't found much on the subject, but was wondering if there is some other plugin/setting I'm missing?
Thanks in advance.
I've just spent a while this morning trying to revive my installation, which becomes slow after a period of time.
I eventually succeeded by reverting to a fresh state, renaming (moving) the data folder. The packages you have downloaded can be copied across from the renamed old data folder - don't forget the package manager.
A little cross-testing seems to indicate that the culprits were the session file in Settings and the project files themselves, so create new project files.
Sometimes ST2 slows down because Data/Settings/Settings.sublime_session gets too big to handle. Take a backup of that file and remove it; Sublime will recreate it, and the new one will be something like 2 KB. The session file grows because it stores recently accessed files, find history, replace history, etc. You can take a look at it - it's a text file.
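If you'd rather script the backup-and-remove step, here is a minimal sketch; the path is only an assumption for a portable-style install (on a normal install the Data folder lives under your user profile instead), so adjust it for your setup:

    # Minimal sketch: back up the ST2 session file and remove it so a small fresh one is recreated.
    # The path is an assumption for a portable-style install; adjust it for your own machine.
    import os
    import shutil

    session = os.path.expanduser("~/Sublime Text 2/Data/Settings/Settings.sublime_session")  # placeholder path

    if os.path.exists(session):
        shutil.copy2(session, session + ".bak")  # keep a backup, just in case
        os.remove(session)  # Sublime recreates a fresh session file on the next start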
all the best.
For "native" Google documents (maybe it applies to all Google formats, I haven't checked), using head in the revisions.get method returns the first revision instead of the latest one. It seems that it only relies on the order of the revisions returned by the revisions.list method, and that order is not really uniform (e.g. a Google document lists the latest first).
This looks like a bug, and I managed to reproduce it. I reported it to the engineers and I'll update this answer as soon as I hear back from them.
UPDATE 12/2: this is now fixed and the fix will be live in 1-2 weeks
I recently installed the latest version of MediaWiki, and it's more or less running fine. However, whenever I try to post what I might consider a "large" entry, I get an error that says I cannot write to index.php, and so the post fails. I have looked through a lot of the documentation, including the variable settings, and cannot seem to nail down the issue or solution. Is it possible that some of the characters in the post are preventing the post? Or is there a limit to the amount of text content (characters or total size)? Any help would be greatly appreciated!
Mark
For starters, check that $wgMaxArticleSize is greater than what you are trying to post. Even in this case, though, you should get an error message, not an outright failure. The content of the post is unlikely to cause problems; MediaWiki is UTF-8 safe.
Run through the checklist here as well: http://www.mediawiki.org/wiki/Manual:Errors_and_symptoms
Have you tried writing the text in a text editor and then pasting it into MediaWiki in smaller chunks, saving the page, then pasting another piece? As long as you don't need to do this too often, this could be significantly easier than trying to solve the problem.
Any ideas?
I think the original source was a GoldMine database. Looking around, it appears that the file was likely built using an application called ACT, which I gather is a huge product I don't really want to be deploying for a one-off file less than 5 MB in total size.
So ...
Anyone know of a simple tool that I can run this file through to convert it to a standard CSV or something?
It does appear (when looking at it in Notepad and Excel) to be in some sort of CSV-type format, but it's as if the data is encrypted somehow.
OK, this is weird.
I got a little confused because the data looked a complete mess; in actual fact the mess was the data - that's what it was meant to look like.
Simply put, I opened the file in Notepad, it seemed to have a sort of pattern, so I dropped it into Excel.
Apparently Excel has no issues reading these files ... strange, huh!
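If anyone would rather script the conversion than go through Excel each time, here's a rough sketch using Python's csv.Sniffer to guess the delimiter and read the rows; the filename and encoding are placeholders for whatever the export actually is:

    # Rough sketch: let csv.Sniffer guess the delimiter, then read the rows.
    # "export.csv" and the encoding are placeholders for the actual file.
    import csv

    with open("export.csv", "r", newline="", encoding="latin-1") as f:
        sample = f.read(4096)
        f.seek(0)
        dialect = csv.Sniffer().sniff(sample)
        for row in csv.reader(f, dialect):
            print(row)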
I am unaware of any third party tooling for opening these files specifically, although there is an SDK available for C# which could resolve your problem with a little elbow grease.
The SDK can be acquired for free Here.
Also, there is a developer forum which could provide some valuable resources, including training material with sample code, Here.
Resources will be provided with the SDK.
Also, out of interest, since ACT is a Sage product, do you have any Sage software floating about which you could attempt to access the data with? Most offices have!
Failing all of the above, there is a trial available for ACT! Here!
Good luck with your problem!