z/OS / MVS: How to 'touch' a dynamic set of files

In MVS, I'm looking for a UNIX-like touch command to keep thousands of files 'alive' on a seldom-used system. I have a list of every file name that might exist at any one time, but the actual files on the catalog can come and go depending on what is running on the system.
HRECALL doesn't work because the files are huge and cannot be allowed to migrate off catalog.
IEBGENER dummy copies don't work because the job fails if any of the files are missing.
Is there a 'touch' command that won't fail on missing files?
Thanks!

The LISTDSI function in REXX (or CLIST) fits the bill. It updates the last-referenced date, which is what HSM uses to decide when to migrate a data set.
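For example, a minimal REXX sketch (the list data set name is illustrative) that reads the name list and calls LISTDSI on each entry, simply skipping anything that isn't currently cataloged:

```rexx
/* REXX -- 'touch' every data set named in a list             */
"ALLOC FI(NAMES) DA('MY.TOUCH.LIST') SHR REUSE"  /* your list */
"EXECIO * DISKR NAMES (STEM line. FINIS"
"FREE FI(NAMES)"
do i = 1 to line.0
  dsn = strip(line.i)
  lrc = LISTDSI("'"dsn"'")   /* referencing the data set info  */
                             /* refreshes the last-ref date    */
  if lrc > 4 then
    say 'Skipped (not cataloged):' dsn
end
```

LISTDSI returns a nonzero function code rather than failing the exec when a data set is missing, which is exactly the behavior you want here.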


PHPMyAdmin issue: Is it advisable to assign a very large value to `$cfg['MaxExactCount']`?

I have a MySQL table with exactly 1 million rows of dummy data for testing purposes. However, when I click the table's "browse" option, this is what I get:
Notice how it says there are only 994,622 records whereas I know there are more.
Funny thing is I can still retrieve those invisible records if I perform a search. For instance, the PHONE field is filled with numbers incrementing from 00000000 through 01000000; thus, the row containing the value 00999999 should exist. And it does:
However, as per the "Browse" screen, the last row in view is the one with the value "00994625." Here's a screenshot that shows the last record; do note the absence of any "next" arrow/link:
I haven't tried exporting, so I'm not sure whether all the records would export. I managed to fix this problem by adding $cfg['MaxExactCount'] to my config.inc.php file. However, I am concerned whether this is advisable. I anticipate my table growing indefinitely once live, eventually winding up with several million rows. Would it be all right to just assign a very large value, say 10000000000, to $cfg['MaxExactCount']? What are the pitfalls and how do I avoid them? Also, do shared hosting providers generally allow one access to alter this particular file?
Yes, it seems you've correctly diagnosed the problem. As you've probably seen in the documentation, the MaxExactCount directive configures the point at which phpMyAdmin switches to a faster but inexact row count.
Whether it's a good idea to change it is pretty much up to you. Any downside would be performance-based and depends on your situation (table structure, server specs and load, etc.), but it's worth trying to see what happens.
If you're able to modify config.inc.php, then go for it. Some hosts provide phpMyAdmin by putting a copy of the phpMyAdmin folder in your web root, in which case you'll be able to modify config.inc.php directly; others use one centrally located copy and configure the web server to make it appear under your domain, in which case you probably won't even be able to find the phpMyAdmin folder, much less modify it. If they don't give you access, you can easily install your own phpMyAdmin instance in your own web root, giving the folder a different name from what your provider uses so as not to conflict, and make whatever modifications you want.
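For reference, the change amounts to one line in config.inc.php (the threshold value below is illustrative, not a recommendation):

```php
<?php
// config.inc.php -- below this row-count threshold phpMyAdmin runs an
// exact SELECT COUNT(*); above it, it trusts the estimated row count.
// Pick a value just above your expected table size, and verify the
// resulting COUNT(*) queries stay fast on your server.
$cfg['MaxExactCount'] = 2000000;
```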

SSIS - File system task, Create directory error

I got an error after running an SSIS package that has worked for a long time.
The error was thrown in a task used to create a directory (like this http://blogs.lessthandot.com/wp-content/uploads/blogs/DataMgmt/ssis_image_05.gif) and says "Cannot create because a file or directory with the same name already exists", but I am sure no directory or file with the same name existed.
Before throwing the error, the task created a file with no extension, named the same as the expected directory. The file has a modified date more than 8 hours earlier than its created date, which is weird.
I checked the date on the server and it is correct. I also tried running the package again and it worked.
What happened?
It sounds like some other process or person made a mistake in that directory and created a file that then blocked your SSIS package's directory-create command; the problem is not within your package.
Did you look at the security settings of the created file? It might have shown an owner that wasn't the credentials your SSIS package runs under. That won't help if you have many packages or processes that all run under the same credentials, but it might provide useful information.
What was in the file? The contents might provide a clue how it got there.
Did any other packages or processes have errors or warnings within half a day of your package's error? Maybe it was the result of another error that you could locate through the logs of the other process.
Did your process fail to clean up after itself on the last run?
Does that directory get deleted at the start of your package run, at the end of your package run, or at the end of the run of the downstream consumer of the directory contents? If your package deletes it at the beginning, then something that slows the delete could present a race condition that normally resolves satisfactorily (the delete finishes before the create starts) but once in a while goes the wrong way.
Were you (or anyone) making a copy or scan of the directory in question? Sometimes copy programs (e.g., FTP) or scanning programs (antivirus, PII scans) make a temporary copy of a large item being processed (such as that directory), and maybe one got interrupted and left the temp copy behind.
If it's not repeatable, then finding out for sure what happened is tough, but if it happens again, try exploring the above. Also, if you can afford to, you might want to increase logging. It takes more CPU and disk space and makes reviewing logs slower, but temporarily increasing log detail can help isolate a problem like this.
Good luck!
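If it does recur, one defensive option is to make the create step tolerant of a transient blocker with a short retry. A sketch of that pattern in Python (in SSIS the equivalent would live in a Script Task; names and timings are illustrative):

```python
import os
import time

def create_dir_with_retry(path, attempts=3, delay=1.0):
    """Retry directory creation to ride out transient conflicts,
    e.g. a leftover file still being cleaned up by another process."""
    for attempt in range(1, attempts + 1):
        try:
            os.makedirs(path)
            return True
        except FileExistsError:
            if os.path.isdir(path):
                return True   # directory already exists: that's fine
            if attempt == attempts:
                raise         # a *file* is blocking us; give up
            time.sleep(delay) # wait and hope the blocker disappears
    return False
```

This distinguishes the harmless case (directory already present) from the one you hit (a file squatting on the name), and only fails after the retries are exhausted.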

How to replace folder/filenames in bulk in Magento

I've just inherited a Magento site from a web development agency. I've migrated it across and it all seems to be working fine, aside from some images are missing.
I've worked out that it's because the previous developers uploaded files into folders with case-sensitive names. So, for example, inside the media/catalog/product/ folder, they have two folders named /s/ and /S/.
Their system obviously allows these case-sensitive folder names. Mine does not!
What has happened is that, as the files were copied across to my server, the system combined /s/ and /S/ into one folder and gave it the uppercase name (/S/). The problem is that Magento then tries to reference some images at /s/ and of course can't find them, as that folder does not exist on my server.
I have lots of folders where this has happened, amounting to thousands of product images.
Does anyone know how to get around this? Is it possible to change the server settings to accept case-sensitive folder and file names? Or do I need to go through the database and do some sort of REGEX to replace all lowercase folder names with uppercase folder names?
In the latter case, does anyone know how to do that and which database tables are involved?
Thanks in advance for your help!
First, a correction to a misconception: "I've worked out that it's because the previous developers have uploaded files into folders with case-sensitive names."
They didn't do this; this is simply how Magento stores files it has processed and moved into its media directories.
Magento expects to be run on an operating system that fully understands and uses case in file names. *nix and Mac OS X do this natively; Windows understands it only in a sort of compatibility mode. Your major issue comes from using a transfer method between the different OSes that corrupts the case.
The easiest fix is to ask them for a zipped tarball and then use 7-Zip to unarchive it on your Windows system. There's too much work involved in correcting this after the fact; use a transfer method from their server to yours that preserves case.
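If you do go the tarball route, it can be worth checking the archive for case-colliding paths before extracting onto a case-insensitive file system. A sketch in Python using only the standard library (paths are illustrative):

```python
import tarfile
from collections import defaultdict

def case_conflicts(tar_path):
    """Return groups of paths in the archive that collide when
    compared case-insensitively (e.g. media/s vs media/S)."""
    groups = defaultdict(set)
    with tarfile.open(tar_path) as tf:
        for name in tf.getnames():
            parts = name.split("/")
            # record every directory prefix so /s/ vs /S/ is caught,
            # not just full paths that differ only by case
            for i in range(1, len(parts) + 1):
                prefix = "/".join(parts[:i])
                groups[prefix.lower()].add(prefix)
    return [sorted(g) for g in groups.values() if len(g) > 1]
```

Any non-empty result means a plain copy onto a case-insensitive system will silently merge those folders, exactly as happened here.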

%appdata% for MS Access installs

David Fenton recently mentioned in another thread that
"The only proper place for any Access app (since Windows 2000, in fact) is the folder the %AppData% environment variable points to."
I greatly respect David's knowledge, especially in all matters relating to Access, but I'm confused by this statement.
What is the advantage of following this advice, especially in an environment where you are going to have multiple people using the same computer to access your app?
Won't installing to this folder only install the app for one user? And if this is true, won't installing your app multiple times leave multiple, separate copies of your app on the machine? Hard drive space is cheap these days, but I still don't want a front end file and other supporting files (graphics, Word and Excel templates, etc.) copied multiple times onto a machine when one copy will do.
What are your thoughts? Am I missing something key to understanding David's advice?
Yes, this is an issue, but the only way around it is, assuming the IT admins allow it, to create a folder in the root of the C: drive and install the Access FE database file there. That said, I'd still use the Application Data folder even if files are duplicated. As you state, hard drives are cheap.
This assumes you don't mean a Terminal Server/Citrix system where users are simultaneously logged into the system.
First off, this is an issue only for a workstation that has multiple users logging on to it. That's pretty uncommon, isn't it?
Second, you admit there's no issue with disk space, so the only real issue is keeping the front end up-to-date, and that issue is really completely orthogonal to the question of where the front end is being stored.
That issue can be addressed by using any of a number of solutions that automatically copy a new version of the front end when the user opens it (if needed). Tony Toews's Auto FE Updater is the best solution I know of. It's quite versatile and easy to use, and Tony's constantly improving it.
So, in short, I don't think there's any issue here at all.
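The pattern such a tool implements, copying down a fresh front end when the master copy is newer, is simple. A sketch of the logic in Python, purely to illustrate (the Auto FE Updater itself is a Windows utility with far more features):

```python
import shutil
from pathlib import Path

def update_front_end(master: Path, local: Path) -> bool:
    """Copy the master front end to the user's machine when the
    local copy is missing or older; return True if a copy happened."""
    if local.exists() and local.stat().st_mtime >= master.stat().st_mtime:
        return False                      # local copy is up to date
    local.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(master, local)           # copy2 preserves the timestamp
    return True
```

Running this at application launch keeps every workstation's front end current without any disk-space concern beyond one local copy per user.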
If everything is always the same for every user on a given machine, then multiple copies of a file may not seem such a good idea. But when that one exception occurs, you've painted yourself into a corner; one user may need a different template version, for example.
You seem to be in a rare situation for an Access developer.
You're running into a bit of an issue here because you're thinking of the environment variable %APPDATA%. That variable holds the directory returned by SHGetSpecialFolderPath(CSIDL_APPDATA).
What you're looking for is the directory returned by SHGetSpecialFolderPath(CSIDL_COMMON_APPDATA). There's no environment variable for that directory, which is (as the name indicates) common to all users.
The advantage of David's method is that the Access data is protected by NTFS access rights, when it's in CSIDL_APPDATA. A user can only delete his copy. In CSIDL_COMMON_APPDATA, anyone can delete the single shared copy.
It's probably best to put this advice into perspective. The assumption being made here is that if your application is going to be used in multi-user mode (that is, more than one user in the application at the same time), then it's pretty much assumed that your application is split into two parts: the application part (front end) and the data-file-only part (back end).
So, you have an FE and a BE.
In this environment, each individual user in your office has their own copy of the application placed on their workstation, and the BE (data file) is placed in a shared folder on a server.
If you're not going to have multiple users running this application, and the application is not under active development, then you don't really need to split it into two parts. However, if you do split your application, all of your users can safely use it while you work on a copy of the next great release. Without a split environment, you really can't have a workable development cycle.
It is a long-standing and honored suggestion that if you're going to use Access in a multi-user environment, each individual user must have a copy of the front-end application placed on their own computer. If you ignore this suggestion, the end result is instability in the general operation of your application.
I have an article here that explains this on a conceptual level; it doesn't just tell you to split your application, it explains why you should:
http://www.members.shaw.ca/AlbertKallal/Articles/split/index.htm

Should I use registry or a flat file to save a program's state?

We have a lot of products that are saving their "states" on the registry.
What is the best practice on saving program states? What are the advantages/disadvantages of saving program states as a registry entry or saving program states to a flat file such as XML?
Thanks!
The obvious answer is that storing state in a normal file makes it easier for users to back up and restore it manually.
Also consider that the registry has some keys that are specific to each user on the system.
I think the registry is the best option for user-specific information that can be discarded and recovered easily (e.g., the last username used to log in). Other data should be in a settings file that can be backed up.
For years programmers had their app settings stored in config files. Then the times changed, and for years they used the registry instead - many of them used it badly, and it caused issues when Vista and its UAC came on the scene.
Nowadays, especially in the .NET world, Windows developers are moving back to storing settings in config files. Personally I think that is the best way: if you need to move your app to another machine or reinstall your OS, all you have to do is save your config file to retain your settings.
There are things you may still want to store in the registry, though, such as (encrypted) licensing info. For everything else, config files are good. Do pay attention to UAC and file virtualisation though, so that you don't run into trouble further down the track.
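As a concrete illustration of the config-file route, here is a minimal save/restore of program state using an INI file (sketched in Python with its standard configparser; the file name and keys are made up):

```python
import configparser
from pathlib import Path

STATE_FILE = Path("app_state.ini")  # illustrative location

def save_state(state):
    """Write a flat dict of settings to an INI file."""
    cp = configparser.ConfigParser()
    cp["state"] = {k: str(v) for k, v in state.items()}
    with STATE_FILE.open("w") as f:
        cp.write(f)

def load_state():
    """Read the settings back; values come back as strings."""
    cp = configparser.ConfigParser()
    cp.read(STATE_FILE)
    return dict(cp["state"]) if cp.has_section("state") else {}
```

The resulting file is plain text, so users can inspect it, back it up, or carry it to a new machine, which is exactly the advantage discussed above.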
Personally I'd go for the flat file.
(I am assuming that "registry" means windows registry?)
A flat file allows you (or even the user) to inspect and, if need be, manually modify the values.
Depending on your situation this can be helpful for debugging, repairing mis-saved data, etc.
Unless you want the data to be "opaque" and therefore hard to find and manipulate, the registry offers little benefit. Maybe it's faster, but if you have lots of state to save you'd be better off with an embedded DB than a flat file anyway.
I used to follow Redmond doctrines. My programs used .INI files. Then I dutifully switched to the registry - and users started complaining. So, I bucked the trend and switched back to .INI files.
Some want to edit them (good/bad?). Some want to back them up, or transfer to a new machine. Some don't want to lose them if they reinstall windows.
As a user, I have multiple partitions: Windows, programs, data, swap (and a few others). No programs go into C:\Program Files; they all go into the programs partition. No data that I can control goes into C:\user data; it all goes into the data partition (use the TweakUI power toy or regedit to change the defaults, though not all programs are well behaved and read the registry for those paths; some just hard-code them).
Bottom line: when Windows gets its panties in a fankle, I do a total reinstall (approximately every three months) and format the C: drive.
By formatting the windows partition, I get a clean install. My data and programs are safe, though I may need to reinstall a few programs, which is why I go with portable versions where at all possible.
IMO, the registry is the biggest evil ever perpetrated on Windows: a single point of failure.
My advice? Locally stored config files. INI if the user is allowed to edit, serialized or binary format if not.
Or, you could offer a choice ...
Personally I go for a flat file, whether it's an INI file or XML file makes no difference to me. However in my line of work, we've had customers prefer the registry instead due to issues relating to deployment. It depends on who your client base is, and what the person keeping your product working prefers.
I always use regular files because they're much easier to develop with:
Simple file I/O versus registry read/write calls I can never remember.
Simple file copy/paste versus exporting/importing keys for backups, or for keeping multiple versions of a config around for testing.
Note that all of these advantages also carry over to deployment strategies and to clients' everyday handling of the configurations.
It depends on how heavy deployment is. Most of my applications are xcopy-deployable; that is, they don't need an installer and can just be copied or unzipped, so I use .ini files (with my own INI file parser, as .NET has no built-in one).
However, if your application needs to be centrally manageable (for example, via Windows Group Policies), or if you have a "heavy" installer anyway, the registry is the prime choice. This is because applications are normally installed to C:\Program Files, and normal users do not have write access to that directory. Sure, there are alternatives (%APPDATA%, or Isolated Storage, which has to be used when the application is a Silverlight app), but you can just as well "go with the flow".
Of course, if your application is supposed to run on Mono, you can rule out the registry anyway and should go with flat files.
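For a cross-platform app, the usual compromise is to pick the per-user settings directory at run time by platform convention. A sketch in Python (the macOS and XDG paths are conventions, not APIs; the app name is illustrative):

```python
import os
import sys
from pathlib import Path

def state_dir(app_name):
    """Return a per-user settings directory following each
    platform's convention: %APPDATA% on Windows, Application
    Support on macOS, XDG config dir elsewhere."""
    if sys.platform == "win32":
        base = Path(os.environ.get("APPDATA", Path.home()))
    elif sys.platform == "darwin":
        base = Path.home() / "Library" / "Application Support"
    else:
        base = Path(os.environ.get("XDG_CONFIG_HOME",
                                   str(Path.home() / ".config")))
    return base / app_name
```

The config file from the earlier examples then lives under this directory rather than next to the executable, which keeps it out of write-protected install locations like C:\Program Files.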