The greatest strength of phpMyAdmin (IMHO) is that it strikes a near-perfect balance between GUI comfort and writing raw SQL.
I am currently on version 4.1.13, and recent versions have been introducing jQuery UI effects and AJAX behaviour.
Now I am having serious trouble with this scenario:
To edit an index, a UI dialog is called up. After saving, everything works, but the problem is that the SQL statement(s) that were executed flash on screen for only a fraction of a second before being slid out of view (a jQuery slideUp).
If you're like me, you sometimes want to copy out this SQL code, to be run on another schema, or for whatever reason.
I expected to find a setting to turn off some of these UI effects, but no such luck.
So, is there a way I can turn off the sliding away of the last used SQL? Even a hack would be okay for me right now.
Thank you.
Version 4.1.13 is outdated; please try the latest stable version.
The title says it all, really.
On a Sun OS 5.1 box, can I use Expect to automate some operations in a legacy Progress application (ported to v.10, but still running as a character-based UI launched from the Procedure Editor)?
Anyone had any real experience with this setup? Is it known to work? Or not to work at all? Any caveats?
Yes. I've done it.
It was quite a while ago but it is certainly possible.
No special caveats specifically due to Progress. It was challenging to automatically navigate through a complex application -- but you would expect that (pun intended...)
As I recall the hardest part was coming up with distinct "anchor" strings for each screen.
It was also useful to build in a bit of wait time here and there.
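If it helps, here is a rough sketch of the same approach using Python's pexpect rather than classic Expect (the idea is identical): wait for a distinct anchor string on each screen, send the keystrokes, and build in a little settle time. The program path, anchor strings and keystrokes below are made up; substitute whatever your Progress screens actually show.

# Sketch only: program path, anchor strings and keys are hypothetical examples.
import pexpect

# Launch the character-based session (replace with your real startup command).
session = pexpect.spawn("/usr/dlc/bin/pro -p start_menu.p", encoding="utf-8", timeout=30)
session.delaybeforesend = 0.5          # a little settle time before each send

# Wait for a string that uniquely identifies the main menu screen.
session.expect("MAIN MENU")            # hypothetical anchor text
session.send("3")                      # pick menu option 3, say "Reports"

# Each screen gets its own distinct anchor before we type anything.
session.expect("Report Parameters")    # hypothetical anchor text
session.sendline("2009-08-31")         # fill in a date field and press Enter

session.expect("Press any key to continue")
session.send(" ")
session.close()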
Before anyone has a chance: yes, I know it's a bad idea. Please don't give me a lecture on how I should use a web service instead. Thanks.
So, how could this be done?
I found this: http://www.karlkraft.com/index.php/2010/09/17/mysql-for-iphone-and-osx/ and thought it might do the trick. I got a bunch of ARC error messages, cleaned those up, and then got this error at runtime:
Detected an attempt to call a symbol in system libraries that is not
present on the iPhone: pthread_cond_init$UNIX2003 called from function
my_thread_init in image oms.
Do I need to use something like ODBC/C?
I know that the solution might be a lengthy one, that's fine. Would be great if someone could at least point me in the right direction.
EDIT:
Since people are keen to know the reason for opting not to use a web service, here it is:
If you're creating an in-house app, the added security of a web service is next to nothing. Working directly with the DB means I need to maintain less code. Plus, I don't need to create hacky PHP scripts to get things done.
FINAL CONCLUSION:
I wanted to leave a message for people who're about to do the same thing: Don't :)
Essentially, your options are hacky server-side scripts or Oracle's proprietary MySQL client library built yourself (and thus a hacky solution as well). Your choice, but I'd strongly advise against it.
This might be the sort of thing that you are looking for:
mysql for iphone and osx
I found it on this iphonedevsdk thread access mysql remote database iphone
Personally, I would only do this if you really, really wanted to.
If you wanted a canned solution, I also found this: Flipper
Or to do it yourself: Build MySql client library for iPhone/iPad
It's not really that hard to find a number of solutions.
I needed the same thing (I understand your lecture pain ;) ), so I wrote this: https://github.com/ciaranj/MySqueakQl. It doesn't link against the MySQL client libraries, so there are no GPL issues, but it is a very minimal, very 'fresh' (i.e. untested) implementation ... just my 2c.
I faced the same problem as you did. I searched and found this.
At http://www.acapela-for-iphone.com/ios-4-2-gm-small-problem-with-simulator, Jean-Michel Reghem says:
"It seems that Apple changes (again) something into the simulator (as in iOS 4.0)."
Some people on that page also say that this problem doesn't show up on an actual device, so you could try that.
The author has updated his code, and it worked.
Here is the link: http://www.karlkraft.com/index.php/2011/06/07/mysql-for-iphone-and-osx-version-2-0/
Today my co-worker noticed that adding a decimal place to a progress indicator creates the impression that the program is running faster than without it (i.e. instead of 1, 2, 3, ... it shows 1, 1.2, 1.4, 1.6, ...). I checked it and was surprised that I got the same impression, even though I knew it was faked.
That makes me wonder: What other things are there to create the impression of a fast application?
Of course the best way is to actually make the application faster, but from an algorithmic point of view there's often not much you can do. Additionally, I think making a user less frustrated is a good thing, even though it is more or less a psychological trick.
This effect can be very dramatic: doing relatively large amounts of work to give users a correct and frequently updated progress status can of course slow down the actual running time of the application (screen updates, calculations needed for the progress display, etc.) while still giving the user the feeling that it takes less time.
Some of the things you could do in GUIs:
make sure your application remains responsive (resizing the forms remains possible, perhaps give a cancel button for the operation?) while background processing is occurring
be very consistent in showing status messages/hourglass cursors throughout the application
if you have something updating during an operation, make sure it updates often (like the almost ridiculous showing of filenames and registry keys during an install), or make sure there's an option to make it do this for users that like this behavior
Present some intermediate, interesting results first. "We've found 2,359 zetuyls matching your request, we're just calculating their future value".
I've seen transport reservation systems do that sort of thing quite nicely.
Showing details (such as the names of files being copied in an installation process) can often make things seem like they're going faster because there's constant, noticeable activity (as opposed to a slowly-creeping progress bar).
If your algorithm is such that it generates a list of results, and you have some way of displaying results as they're generated (as opposed to all at once at the end), do so - the sooner the user has something else to look at besides a spinner, the better.
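A minimal sketch of that pattern in Python (the search function and its per-item delay are made up): yield results as you find them instead of returning one big list at the end, so the UI can show them immediately:

import time

def search(items, term):
    # Yield each hit as soon as it is found instead of collecting them all first.
    for item in items:
        time.sleep(0.2)              # stand-in for real per-item work
        if term in item:
            yield item

# The caller can display results the moment they arrive.
for hit in search(["alpha", "beta", "alphabet", "gamma"], "alpha"):
    print("Found so far:", hit)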
Allow the user to do something else, while your application is processing data or waiting for a result. In application-scope you could allow to do some refinement of a search query or collect information for preparing next steps. Or just present some other "work" necessary to do or just some hints, documentation, statistics, entertainment..
Use one of those animated progress bars which look like they are doing something even when they aren't progressing. Also, as peSHIr said - print each filename that you copy and update it really fast - you could even fake it by cycling through a large string array N times a second.
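For what it's worth, here is a tiny sketch of that "fake activity" trick; the file names are invented, and in a real GUI you would update a label instead of printing:

import itertools, sys, time

fake_files = ["setup.ini", "core.dll", "strings.dat", "icons.res"]  # invented names

# Cycle through the list roughly 10 times a second while the real work runs elsewhere.
for name in itertools.islice(itertools.cycle(fake_files), 50):
    sys.stdout.write("\rCopying %-20s" % name)
    sys.stdout.flush()
    time.sleep(0.1)
print()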
I've read somewhere that a process that appears to be speeding up feels faster than one progressing at a steady pace. I can't find the reference right now, but it should be simple to implement (see the sketch below).
(10 minutes later...)
A further look down Google lane unearthed the following references:
http://www.azarask.in/blog/post/hacking-memory/
http://blogs.msdn.com/time/
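As a minimal sketch of the "speeding up" idea (my own mapping, not from those references): run the real 0-1 fraction through a convex curve, so the displayed bar crawls at first and accelerates toward the end while still finishing at 100% on time:

def displayed_progress(real_fraction, exponent=2.0):
    # real_fraction is the true 0.0-1.0 progress; the displayed value lags early
    # and catches up late, which feels like the task is speeding up.
    return real_fraction ** exponent

for step in range(0, 11):
    real = step / 10.0
    print("real %3d%%  shown %3d%%" % (real * 100, displayed_progress(real) * 100))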
Here is an article about "Expressing time in your UI" and user perception of time. I do not know if it is exactly what you expect as an answer, but it is definitely worth the read.
Add a thread sleep at critical points. With each passing version, reduce the delay.
I just got an email saying that I have to change a config value on 2009-09-01 (new taxes). Our normal approach would be to wake up on 2009-08-31 at 23:59 and change the value manually. That's not a big problem, since this doesn't happen too often, but it makes me wonder how other people handle issues like this.
So! How do you handle date specific config changes?
(We are working in asp.net but I don't think this has to be language specific)
Br
Carl Bergquist
I'd normally store this kind of data in a database table like this
Key, Value, EffectiveFrom, EffectiveTo
-----------------------------------------
VAT, 15.0, 20081201, 20091231
VAT, 17.5, 20100101, NULL
I'd then use the EffectiveFrom and EffectiveTo dates to choose the value that is effective at the given time. If the rate is open-ended, then the EffectiveTo could either be NULL or 99991231.
This also allows you to go back without having to change the config. E.g. if someone asks you to recalculate the tax for the previous month before the rate change.
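To make the lookup concrete, here is a small sketch of the "effective at a given date" selection in Python (the key and column names follow the table above; in practice this would just be a WHERE clause against your config table):

from datetime import date

# (Key, Value, EffectiveFrom, EffectiveTo) rows, mirroring the table above.
CONFIG = [
    ("VAT", 15.0, date(2008, 12, 1), date(2009, 12, 31)),
    ("VAT", 17.5, date(2010, 1, 1), None),          # open-ended rate
]

def effective_value(key, on_date):
    for k, value, eff_from, eff_to in CONFIG:
        if k == key and eff_from <= on_date and (eff_to is None or on_date <= eff_to):
            return value
    raise KeyError("no %s rate effective on %s" % (key, on_date))

print(effective_value("VAT", date(2009, 6, 1)))     # 15.0
print(effective_value("VAT", date(2010, 6, 1)))     # 17.5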
On Linux, there is a command "at" for scheduling one-off jobs to run at a given time.
See "man at" for details.
To be honest, waking up near the time and changing it seems to be the simplest and cheapest approach. All of the technical solutions are fine, but it depends where you work.
In our environment it would be cheaper and simpler to get someone to wake up and make the change than to redevelop the functionality of a piece of software that already works. It certainly involves less testing, development overhead and costs which means we would tend to solve the problem as you do, manually.
That depends totally on the situation and the technology.
pjp's idea is good, if you get your config from a database, or as metadata to define the valid time for whole config sets/files.
Another might be: just prepare a new config file with the new entries and swap the files at midnight (probably with a restart of the service/program, whatever is needed).
Swapping them would be possible with at (as mentioned by Neeraj) ...
If timing is a problem, you should handle the change, or at least the timing of the change, on the running server (to avoid time-out-of-sync problems).
We had the same kind of problem some time ago and handled it using the following approach.
This is suitable if you know the source that originates the configuration changes well.
In our case, the source (actually a third party) exposed a web service that returns the modified config details, and there is a Windows service running on our server that keeps polling the web service and updates the configuration file whenever there is a change.
This works perfectly in our case.
You can make use of this approach by changing the web-service polling part to your own source of config changes (say, reading changes from some disk path). I'm not sure how that would work if the changes only arrive by email, though.
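As a rough illustration of that polling setup (the URL, file path and interval below are all placeholders, and error handling is omitted):

import time
import urllib.request

CONFIG_URL = "https://example.com/config-service/latest"   # placeholder endpoint
CONFIG_PATH = "/path/to/app.config"                         # placeholder file
POLL_SECONDS = 300

last_seen = None
while True:
    remote = urllib.request.urlopen(CONFIG_URL).read()
    if remote != last_seen:
        with open(CONFIG_PATH, "wb") as f:   # overwrite the live config on change
            f.write(remote)
        last_seen = remote
    time.sleep(POLL_SECONDS)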
Why not just make a shell script to swap out the files? Run it from cron a minute before the deadline, and have it send an alert text if it is NOT successful and an email if it is.
This is an example on a Linux box but I think you get the point and can do this on a Windows box.
Script:
#!/bin/sh
# Back up the current (old) config with a timestamp, then copy the new one over it.
cp /path/to/old/config /path/to/backup/dir/config.$(date +%Y%m%d%H%M%S)

# sendSuccessEmail / sendPanicTextAlert are placeholders for your own notification commands.
if cp /path/to/new/config /path/to/old/config; then
    sendSuccessEmail
else
    sendPanicTextAlert
fi
cron:
59 23 31 8 * /path/to/script.sh
You could test this beforehand as well; just point it at some dummy directories and files.
I've seen the hybrid approach. Instead of actually changing the data model to include EffectiveDate/EndDate or manually changing the values yourself, schedule a script to change the values automatically. Also, be sure to have a solid test plan that will validate all changes.
However, this type of manual change can have a dramatic impact on reporting. If previous transactions join directly to the tables being changed, numbers in historical reports could change in a very bad way. There really is no "right" answer.
If I'm not able to do something like pjp's solution, I'd use either a scheduled task or a server job to update it automatically at the right time.
But...I'd probably still be awake checking it had worked.
Look, the best solution would be to parameterise your config file and include things like the date from which a certain entry should be used. This would negate the need for any copying or swapping of files, and your application would simply deal with it. (That goes for a config-file approach or a database.)
If you cannot change the current systems and you have to go with swapping the config files, then you also have two options:
Use a scheduled task to kick off a batch job, or even a VBScript or PowerShell script (whichever you feel comfortable with). Make sure you set up the correct credentials to be able to do this in the middle of the night, and you could also add some checking and mitigation to this approach.
Write a Windows service that does this for you. Here you have all the flexibility you need: code it to do whatever it needs to do, do all the checks you need (so that you can keep sleeping rather than making sure it actually worked), etc. Your service would then even take care of the scheduling aspect and all will be good. Here you could use an XML DOM object and XPath and, rather than replacing the file, simply update the specific entries as required (see the sketch below).
Remember that any change to the config file would cause your site to restart, so make sure you take care of all the other housekeeping this could cause. (Although this would be exactly the same if you were sitting there in the middle of the night copying files around.)
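A sketch of that "update the specific entry in place" idea, using Python's ElementTree here instead of the .NET XML DOM (the appSettings key, the new value and the file path are just examples):

import xml.etree.ElementTree as ET

CONFIG_PATH = "web.config"          # example path
tree = ET.parse(CONFIG_PATH)

# Find the single appSettings entry we care about and change its value in place.
node = tree.getroot().find(".//appSettings/add[@key='TaxRate']")  # example key
if node is None:
    raise SystemExit("TaxRate setting not found")
node.set("value", "25.0")

tree.write(CONFIG_PATH, xml_declaration=True, encoding="utf-8")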
Does anyone have any experience of introducing FxCop to legacy code? We would like to have our build fail if anyone introduces code that violates rules. But for the time being, this is impossible, as the legacy code has over 9000 violations.
The only ways to suppress errors that I know of are the SuppressMessage attribute, which only works on methods, and the GeneratedCodeAttribute. The latter can be used on classes and namespaces (if I recall correctly), but shouldn't be used for non-generated code (see here).
Right now, we take some time each day to remove violations, but new ones keep being introduced, because our build won't fail.
Any ideas?
I have been in a similar situation. I started using FxCop on an existing project some time ago, and had quite a few errors at the start. What I did was to turn off all the rules, then turn on one group at a time, resolving errors as I went.
The Security and Performance groups are a good place to start - they helped me find issues I was not aware of before. Some of the rules are subjective, and may not fully apply to your project, if at all. For example, if internationalization is not an issue, then leave that group turned off. If there are specific rules that do not apply to you, such as naming rules, then turn them off.
If you manage to clear out a set of errors for a certain rule, you can set the build to fail if that rule is violated in the future. So no new errors will creep in.
If it's a project of some size, just go a rule at a time, review the rule's relevance/importance, and either fix the errors or turn the rule off if it does not apply.
Start by asking yourself this: Are you willing and able to change the legacy code to conform with FxCop rules? Or to put it differently: Is this the best way to spend your time?
If you are willing to spend the time and effort, start by picking the small handful of rules you find most important for the overall quality and implement those. If this is helpful you can add a few rules, fix code and so forth.
In my experience there's no big bang approach for implementing FxCop rules and the like. The only feasible way is to take small chunks at a time.
You can add exceptions for old violations in an FxCop project. That way you won't need to add any attributes to your existing code, and you will still get warnings about all new violations.
To do that, create a project in the FxCop GUI, run the analysis with your rules, then in the results view select the violations you want to ignore for now. Right-click and choose "Exclude". The selected warnings will move to the "excluded in project" tab. When you are ready to go back and fix them, select them and click "mark as active".
These exclusions are stored in the .FxCop file.
Still, I'd recommend introducing rules gradually, to smooth the learning curve for everyone.
How about the following approach:
Run FxCop with all the rules that are relevant for your project turned on
Save results as a baseline
Develop new code
Run FxCop
Remove from the results everything that is already in the baseline
This effectively gives you FxCop checks on just your new code.
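One rough way to wire that comparison into a build (a sketch only; it assumes you have exported the FxCop results as plain-text reports, one violation per line, and the file names are made up): diff the current report against the baseline and fail if anything new shows up.

import sys

def load_issues(path):
    # One violation per line; adjust the parsing to however you export FxCop results.
    with open(path) as f:
        return set(line.strip() for line in f if line.strip())

baseline = load_issues("fxcop-baseline.txt")   # made-up file names
current = load_issues("fxcop-current.txt")

new_issues = current - baseline
for issue in sorted(new_issues):
    print("NEW:", issue)

sys.exit(1 if new_issues else 0)   # non-zero exit fails the build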