We have code quality issues caused by inline MySQL queries. Hand-written SQL strings scattered through the modules really clutter the code and bloat the code base.
Our code is cluttered with blocks like this:
/* beautify ignore:start */
/* jshint ignore:start */
var sql = "SELECT *"
+" ,DATE_ADD(sc.created_at,INTERVAL 14 DAY) AS duedate"
+" ,distance_mail(?,?,lat,lon) as distance,count(pks.skill_id) c1"
+" ,count(ps.profile_id) c2"
+" FROM TABLE sc"
+" JOIN "
+" PACKAGE_V psc on sc.id = psc.s_id "
+" JOIN "
+" PACKAGE_SKILL pks on pks.package_id = psc.package_id "
+" LEFT JOIN PROFILE_SKILL ps on ps.skill_id = pks.skill_id and ps.profile_id = ?"
+" WHERE sc.type in "
+" ('a',"
+" 'b',"
+" 'c' ,"
+" 'd',"
+" 'e',"
+" 'f',"
+" 'g',"
+" 'h')"
+" AND sc.status = 'open'"
+" AND sc.crowd_type = ?"
+" AND sc.created_at < DATE_SUB(NOW(),INTERVAL 10 MINUTE) "
+" AND sc.created_at > DATE_SUB(NOW(),INTERVAL 14 DAY)"
+" AND distance_mail(?, ?,lat,lon) < 500"
+" GROUP BY sc.id"
+" HAVING c1 = c2 "
+" ORDER BY distance;";
/* jshint ignore:end */
/* beautify ignore:end */
I had to blur the code a little bit.
As you can see, having this repeated throughout the code is simply unreadable, especially since we cannot move to ES6 at the moment, whose template literals would at least make the strings a little prettier thanks to multi-line support.
The question now is: is there a way to store these SQL queries in one place? As additional information, we use Node (~0.12) and Express to expose an API backed by a MySQL database.
I already considered using a JSON file, but that would result in an even bigger mess; it may not even work, since JSON strings cannot contain literal line breaks, so the queries would still have to be escaped onto single lines.
Then I came up with the idea of storing the SQL in files and loading them at startup of the Node app. This is currently my best shot at getting the SQL queries into ONE place and offering them to the rest of the Node modules.
The question then is: use ONE file? One file per query? One file per database table?
Any help is appreciated; I can't be the first person on the planet facing this, so maybe someone has a nice, working solution!
PS: I tried using libs like squel, but that does not really help, since our queries are complex, as you can see. This is mainly about getting OUR queries into a "query central".
I prefer putting every bigger query in its own file. That way you get syntax highlighting, and the files are easy to load on server start. To structure this, I usually have one folder for all queries and, inside it, one folder per model.
# queries/mymodel/select.mymodel.sql
SELECT * FROM mymodel;
// in mymodel.js
const fs = require('fs');
const queries = {
    select: fs.readFileSync(__dirname + '/queries/mymodel/select.mymodel.sql', 'utf8')
};
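Loading the file once at startup means the rest of the code only references the string. A minimal usage sketch (the mysql connection below is illustrative and not part of the original answer):
// still in mymodel.js - the preloaded text of select.mymodel.sql goes straight to the driver
const mysql = require('mysql');
const connection = mysql.createConnection({ /* host, user, password, database */ });

connection.query(queries.select, function (err, rows) {
    if (err) throw err;
    console.log(rows);
});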
I suggest you store your queries in .sql files, away from your JS code. This separates the concerns and makes both code and queries much more readable. You should have different directories with a nested structure based on your business domain.
eg:
queries
├── global.sql
├── products
│   └── select.sql
└── users
    └── select.sql
Now, you just need to require all these files at application startup. You can either do it manually or use some logic. The code below will read all the files (sync) and produce an object with the same hierarchy as the folder above
var glob = require('glob')
var _ = require('lodash')
var fs = require('fs')
// directory containing all queries (in nested folders)
var queriesDirectory = 'queries'
// get all sql files in dir and sub dirs
var files = glob.sync(queriesDirectory + '/**/*.sql', {})
// create object to store all queries
var queries = {}
_.each(files, function(file){
    // 1. read file text
    var queryText = fs.readFileSync(__dirname + '/' + file, 'utf8')
    // 2. store into object
    // create regex for directory name
    var directoryNameReg = new RegExp("^" + queriesDirectory + "/")
    // get the property path to set in the final object, eg: model.queryName
    var queryPath = file
        // remove directory name
        .replace(directoryNameReg,'')
        // remove extension
        .replace(/\.sql/,'')
        // replace '/' with '.'
        .replace(/\//g, '.')
    // use lodash to set the nested properties
    _.set(queries, queryPath, queryText)
})
// final object with all queries according to nested folder structure
console.log(queries)
log output
{
    global: '-- global query if needed\n',
    products: {
        select: 'select * from products\n'
    },
    users: {
        select: 'select * from users\n'
    }
}
so you can access all queries like this: queries.users.select
Put your query into a database stored procedure and call the procedure from your code when it is needed.
create procedure sp_query()
select * from table1;
There are a few things you want to do. First, you want to store multi-line strings without ES6. You can take advantage of a function's toString.
var getComment = function(fx) {
        var str = fx.toString();
        return str.substring(str.indexOf('/*') + 2, str.indexOf('*/'));
    },
    queryA = function() {
        /*
        select blah
        from tableA
        where whatever = condition
        */
    };

console.log(getComment(queryA));
You can now create a module and store lots of these functions. For example:
//Name it something like salesQry.js under the root directory of your node project.
var getComment = function(fx) {
        var str = fx.toString();
        return str.substring(str.indexOf('/*') + 2, str.indexOf('*/'));
    },
    query = {};

query.template = getComment(function() { /*Put query here*/ });

query.b = getComment(function() {
    /*
    SELECT *
        ,DATE_ADD(sc.created_at,INTERVAL 14 DAY) AS duedate
        ,distance_mail(?,?,lat,lon) as distance,count(pks.skill_id) c1
        ,count(ps.profile_id) c2
    FROM TABLE sc
    JOIN PACKAGE_V psc on sc.id = psc.s_id
    JOIN PACKAGE_SKILL pks on pks.package_id = psc.package_id
    LEFT JOIN PROFILE_SKILL ps on ps.skill_id = pks.skill_id AND ps.profile_id = ?
    WHERE sc.type in ('a','b','c','d','e','f','g','h')
        AND sc.status = 'open'
        AND sc.crowd_type = ?
        AND sc.created_at < DATE_SUB(NOW(),INTERVAL 10 MINUTE)
        AND sc.created_at > DATE_SUB(NOW(),INTERVAL 14 DAY)
        AND distance_mail(?, ?,lat,lon) < 500
    GROUP BY sc.id
    HAVING c1 = c2
    ORDER BY distance;
    */
});

//Debug
console.log(query.template);
console.log(query.b);

//module.exports = query; //Uncomment this so SQL.js can look queries up by name.
You can require the necessary packages and build your logic right in this module or build a generic wrapper module for better OO design.
//Name it something like SQL.js in the root directory of your node project.
var mysql = require('mysql'),
    connection = mysql.createConnection({
        host: 'localhost',
        user: 'me',
        password: 'secret',
        database: 'my_db'
    });

module.exports.load = function(moduleName) {
    var SQL = require(moduleName);
    return {
        query: function(statement, param, callback) {
            // connection.query() connects implicitly; don't end() the shared
            // connection here or it can't be reused for the next query.
            connection.query(SQL[statement], param, function(err, results) {
                callback(err, results);
            });
        }
    };
};
To use it, you do something like:
var Sql = require('./SQL.js').load('./salesQry.js');
Sql.query('b', param, function (err, results) {
    ...
});
I come from a different platform, so I'm not sure if this is exactly what you are looking for. Like your application, we had many template queries, and we didn't like having them hard-coded in the application.
We created a table in MySQL that stores Template_Name (unique) and Template_SQL.
We then wrote a small function within our application that returns the SQL template.
Something like this:
SQL = fn_get_template_sql(Template_name);
We then process the SQL, something like this:
pseudo:
if SQL is not empty
SQL = replace all parameters// use escape mysql strings from your parameter
execute the SQL
Alternatively, you could read the SQL, create the connection, and bind the parameters in whatever way is safest for you.
This lets you edit a template query wherever and whenever you need. You can add an audit table for the template table that captures all previous changes, so you can revert to a previous template if needed, and you can extend the table to record who last edited the SQL and when.
From a performance point of view, this works on the fly, and you don't have to read any files or restart the server when adding new templates, as you would with an approach that loads everything at server start.
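To make this concrete in the Node context, here is a minimal sketch, assuming a hypothetical query_templates table with template_name and template_sql columns and the mysql driver (none of these names come from the original answer):
var mysql = require('mysql');
var connection = mysql.createConnection({ /* your connection settings */ });

// Fetch a template by name, then run it with driver-escaped parameters.
function runTemplate(templateName, params, callback) {
    connection.query(
        'SELECT template_sql FROM query_templates WHERE template_name = ?',
        [templateName],
        function (err, rows) {
            if (err) return callback(err);
            if (!rows.length) return callback(new Error('Unknown template: ' + templateName));
            // Placeholders (?) inside the stored SQL are bound here, so values are escaped by the driver.
            connection.query(rows[0].template_sql, params, callback);
        }
    );
}

// Usage: runTemplate('open_packages_by_distance', [lat, lon, profileId], function (err, results) { ... });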
You could create a completely new npm module, let's call it the custom-queries module, and put all your complex queries in there.
Then you can categorize all your queries by resource and by action. For example, the dir structure can be:
/index.js -> it will bootstrap all the resources
/queries
/queries/sc (random name)
/queries/psc (random name)
/queries/complex (random name)
The following query can live under the /queries/complex directory in its own file, and the file will have a descriptive name (let's assume retrieveDistance).
// You can define some placeholders within this var so that the query stays configurable and reusable in different parts of your code.
/* jshint ignore:start */
var sql = "SELECT *"
+" ,DATE_ADD(sc.created_at,INTERVAL 14 DAY) AS duedate"
+" ,distance_mail(?,?,lat,lon) as distance,count(pks.skill_id) c1"
+" ,count(ps.profile_id) c2"
+" FROM TABLE sc"
+" JOIN "
+" PACKAGE_V psc on sc.id = psc.s_id "
+" JOIN "
+" PACKAGE_SKILL pks on pks.package_id = psc.package_id "
+" LEFT JOIN PROFILE_SKILL ps on ps.skill_id = pks.skill_id and ps.profile_id = ?"
+" WHERE sc.type in "
+" ('a',"
+" 'b',"
+" 'c' ,"
+" 'd',"
+" 'e',"
+" 'f',"
+" 'g',"
+" 'h')"
+" AND sc.status = 'open'"
+" AND sc.crowd_type = ?"
+" AND sc.created_at < DATE_SUB(NOW(),INTERVAL 10 MINUTE) "
+" AND sc.created_at > DATE_SUB(NOW(),INTERVAL 14 DAY)"
+" AND distance_mail(?, ?,lat,lon) < 500"
+" GROUP BY sc.id"
+" HAVING c1 = c2 "
+" ORDER BY distance;";
/* jshint ignore:end */
module.exports = sql;
The top level index.js will export an object with all the complex queries. An example can be:
var sc = require('./queries/sc');
var psc = require('./queries/psc');
var complex = require('./queries/complex');
// Quite important because you want to ensure that no one will touch the queries outside of
// the scope of this module. Be careful, because the Object.freeze is freezing only the top
// level elements of the object and it is not recursively freezing the nested objects.
var queries = Object.freeze({
sc: sc,
psc: psc,
complex: complex
});
module.exports = queries;
Finally, on your main code you can use the module like that:
var cq = require('custom-queries');
var retrieveDistanceQuery = cq.complex.retrieveDistance;
// #todo: replace the placeholders if they exist
Doing something like that, you move all the noise of the string concatenation to a place where you expect it, and you can easily find all your complex queries in one place.
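For completeness, the per-directory index files implied by that top-level index.js can be as simple as the following sketch (retrieveDistance is the file name assumed earlier in this answer):
// queries/complex/index.js
module.exports = {
    retrieveDistance: require('./retrieveDistance')
};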
This is no doubt a million-dollar question, and I think the right solution always depends on the case.
Here are my thoughts; I hope they help:
One simple trick (which, I have read, is surprisingly more efficient than concatenating strings with "+") is to put each row in an array of strings and join them.
It is still a mess but, at least for me, a bit clearer (especially when using, as I do, "\n" as the separator instead of spaces, which makes the resulting strings more readable when printed for debugging).
Example:
var sql = [
    "select foo.bar",
    "from baz",
    "join foo on (",
    "    foo.bazId = baz.id",
    ")", // I always leave the trailing comma to avoid errors if the query later grows.
].join("\n"); // or .join(" ") if you prefer.
As a hint, I use that syntax in my own SQL "building" library. It may not work for overly complex queries but, in cases where the provided parameters can vary, it is very helpful for avoiding (also suboptimal) "coalesce" messes by fully removing unneeded query parts. It is also on GitHub (and it isn't too complex code), so you can extend it if you find it useful.
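As an illustration of that last point, here is a small sketch of the same array technique with optional parts (the filter names are invented for the example):
// Build only the pieces that apply, then join: no COALESCE tricks needed.
function buildSearchSql(filters) {
    var parts = [
        "select foo.bar",
        "from baz",
        "where 1 = 1",
    ];
    if (filters.status) parts.push("  and baz.status = ?");
    if (filters.since) parts.push("  and baz.created_at > ?");
    return parts.join("\n");
}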
If you prefer separate files:
Regarding single versus multiple files: multiple files are less efficient to read (more file open/close overhead and less effective OS-level caching). But if you load all of them a single time at startup, the difference is in fact hardly noticeable.
So the only drawback (for me) is that it becomes too hard to get a "global glance" of your query collection. Even if you have a huge number of queries, I think it is better to mix both approaches. That is: group related queries in the same file so you have a single file per module, submodel, or whatever criterion you choose.
Of course, a single file results in a relatively "huge" file, also difficult to handle at first. But I use vim's marker-based folding (foldmethod=marker), which is very helpful for handling such files.
Of course, if you don't (yet) use vim (really??), you won't have that option, but surely your editor has an alternative. If not, you can always use syntax folding and something like "function (my_tag) {" as markers.
For example:
---(Query 1)---------------------/*{{{*/
select foo from bar;
---------------------------------/*}}}*/
---(Query 2)---------------------/*{{{*/
select foo.baz
from foo
join bar using (foobar)
---------------------------------/*}}}*/
...when folded, I see it as:
+-- 3 lines: ---(Query 1)------------------------------------------------
+-- 5 lines: ---(Query 2)------------------------------------------------
Which, with properly chosen labels, is much handier to manage; and from the parsing point of view, it is not difficult to split the whole file into queries at those separator rows, using the labels as keys to index them.
Dirty example:
#!/usr/bin/env node
"use strict";
var Fs = require("fs");
var src = Fs.readFileSync("./test.sql");
var queries = {};
var label = false;
String(src).split("\n").map(function(row){
    var m = row.match(/^-+\((.*?)\)-+[/*{]*$/);
    if (m) return queries[label = m[1].replace(" ", "_").toLowerCase()] = "";
    if (row.match(/^-+[/*}]*$/)) return label = false;
    if (label) queries[label] += row + "\n";
});
console.log(queries);
// { query_1: 'select foo from bar;\n',
// query_2: 'select foo.baz \nfrom foo\njoin bar using (foobar)\n' }
console.log(queries["query_1"]);
// select foo from bar;
console.log(queries["query_2"]);
// select foo.baz
// from foo
// join bar using (foobar)
Finally (just an idea): if you are already making this effort, it wouldn't be a bad idea to add a boolean mark next to each query label saying whether that query is intended to be used frequently or only occasionally. Then you can use that information to prepare those statements at application startup, or only when they are actually going to be used more than once.
Can you create a view with that query, and then select from the view?
I don't see any parameters in the query, so I suppose view creation is possible.
Create stored procedures for all the queries, and replace each var sql = "SELECT..." with a call to the procedure, like var sql = "CALL usp_get_packages".
This is the best option for performance, and it doesn't break any dependency in the application. Depending on the number of queries this may be a huge task, but in every respect (maintainability, performance, dependencies, etc.) it is the best solution.
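With the node mysql driver, calling such a procedure looks roughly like this (a sketch only; the parameters and their order are assumptions, and usp_get_packages is just the name from the example above):
// Parameters are passed as placeholders, just as with a plain SELECT.
connection.query('CALL usp_get_packages(?, ?)', [profileId, crowdType], function (err, results) {
    if (err) throw err;
    // For CALL statements the driver returns an array of result sets;
    // the first element holds the rows produced by the procedure.
    console.log(results[0]);
});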
I'm late to the party, but if you want to store related queries in a single file, YAML is a good fit because it handles arbitrary whitespace better than pretty much any other data serialization format, and it has some other nice features like comments:
someQuery: |-
    SELECT *
        ,DATE_ADD(sc.created_at,INTERVAL 14 DAY) AS duedate
        ,distance_mail(?,?,lat,lon) as distance,count(pks.skill_id) c1
        ,count(ps.profile_id) c2
    FROM TABLE sc
    -- ...
# Here's a comment explaining the following query
someOtherQuery: |-
    SELECT 1;
This way, using a module like js-yaml you can easily load all of the queries into an object at startup and access each by a sensible name:
const fs = require('fs');
const jsyaml = require('js-yaml');
module.exports = jsyaml.load(fs.readFileSync('queries.yml', 'utf8'));
Here's a snippet of it in action (using a template string instead of a file):
const yml =
`someQuery: |-
    SELECT *
    FROM TABLE sc;
someOtherQuery: |-
    SELECT 1;`;
const queries = jsyaml.load(yml);
console.dir(queries);
console.log(queries.someQuery);
<script src="https://unpkg.com/js-yaml@3.8.1/dist/js-yaml.min.js"></script>
Another approach with separate files, using ES6 template strings.
Of course, this doesn't answer the original question because it requires ES6, but there is already an accepted answer which I'm not intending to replace. I simply thought that it is interesting from the point of view of the discussion about query storage and management alternatives.
// myQuery.sql.js
"use strict";
var p = module.parent;
var someVar = p ? '$1' : ':someVar'; // Comments if needed...
var someOtherVar = p ? '$2' : ':someOtherVar';
module.exports = `
--##sql##
select foo from bar
where x = ${someVar} and y = ${someOtherVar}
--##/sql##
`;
module.parent || console.log(module.exports);
// (or simply "p || console.log(module.exports);")
NOTE: This is the original (basic) approach. I later evolved it, adding some interesting improvements (the BONUS, BONUS 2 and FINAL EDIT sections). See the bottom of this post for a full-featured snippet.
The advantages of this approach are:
It is very readable, even with the little JavaScript overhead.
Both the JavaScript and the SQL sections can be properly syntax highlighted (at least in Vim).
Parameters appear as readable variable names instead of opaque "$1", "$2", etc., and they are explicitly declared at the top of the file, so it's simple to check the order in which they must be provided.
The file can be required as myQuery = require("path/to/myQuery.sql.js"), yielding a valid query string with $1, $2, etc. positional parameters in the specified order.
But it can also be executed directly with node path/to/myQuery.sql.js, producing valid SQL to run in an SQL interpreter.
This way you avoid the mess of copying the query back and forth between query-testing environments and application code, replacing parameter specifications (or values) each time: simply use the same file.
Note: I used PostgreSQL syntax for variable names, but it's simple to adapt to other databases if theirs differs.
More than that: with a few more tweaks (see the BONUS section), you can turn it into a viable console testing tool and:
Generate already-parameterized SQL by executing something like node myQueryFile.sql.js parameter1 parameter2 [...].
...or directly execute it by piping it to your database console. Ex: node myQueryFile.sql.js some_parameter | psql -U myUser -h db_host db_name.
Even more: you can tweak the query to behave slightly differently when executed from the console (see the BONUS 2 section), avoiding wasting space on large but uninteresting data while keeping it when the query is read by the application that needs it.
And, of course, you can pipe the output through less -S to avoid line wrapping and explore the data by scrolling both horizontally and vertically.
Example:
(
echo "\set someVar 3"
echo "\set someOtherVar 'foo'"
node path/to/myQuery.sql.js
) | psql dbName
NOTES:
'##sql##' and '##/sql##' (or similar) labels are fully optional,
but very useful for proper syntax highlighting, at least in Vim.
This extra plumbing is no longer necessary (see the BONUS section).
In fact, I don't actually type the (...) | psql... code above directly into the console; I simply keep the following in a vim buffer:
echo "\set someVar 3"
echo "\set someOtherVar 'foo'"
node path/to/myQuery.sql.js
...repeated for as many test conditions as I want to try, and I execute them by visually selecting the desired block and typing :!bash | psql ...
BONUS: (edit)
I ended up using this approach in many projects, with just a simple modification consisting of changing the last row(s):
module.parent || console.log(module.exports);
// (or simply "p || console.log(module.exports);")
...by:
p || console.log(
`
\\set someVar '''${process.argv[2]}'''
\\set someOtherVar '''${process.argv[3]}'''
`
+ module.exports
);
This way I can generate already-parameterized queries from the command line just by passing the values as positional arguments. Example:
myUser@myHost:~$ node myQuery.sql.js foo bar
\set someVar '''foo'''
\set someOtherVar '''bar'''
--##sql##
select foo from bar
where x = :someVar and y = :someOtherVar
--##/sql##
...and, better than that: I can pipe it to postgres (or any other database) console just like this:
myUser@myHost:~$ node myQuery.sql.js foo bar | psql -h dbHost -U dbUser dbName
foo
------
100
200
300
(3 rows)
This approach makes it much easier to test multiple values, because you can simply use the command-line history to recover previous commands and edit whatever you want.
BONUS 2:
A few more tricks:
1. Sometimes we need to retrieve columns with binary and/or large data that are difficult to read in the console and that, in fact, we probably don't even need to see at all while testing the query.
In these cases we can take advantage of the p variable to alter the output of the query and shorten, format more properly, or simply remove that column from the projection.
Examples:
Format: ${p ? jsonb_column : "jsonb_pretty("+jsonb_column+")"},
Shorten: ${p ? long_text : "substring("+long_text+")"},
Remove: ${p ? binary_data + "," : ""} (notice that, in this case, I moved the comma inside the expression so that it can be omitted in the console version).
2. Not a trick, in fact, but just a reminder: we all know that to deal with large output in the console, we only need to pipe it to the less command.
But I, at least, often forget that when the output is table-aligned and too wide to fit in the terminal, the -S flag tells less not to wrap lines and instead lets us also scroll horizontally to explore the data.
Here is the full version of the original snippet with these changes applied:
// myQuery.sql.js
"use strict";
var p = module.parent;
var someVar = p ? '$1' : ':someVar'; // Comments if needed...
var someOtherVar = p ? '$2' : ':someOtherVar';
// Column names kept in plain variables so they can be reused below:
var baz = "baz";
var long_hash = "long_hash";
module.exports = `
--##sql##
select
    foo
    , bar
    , ${p ? baz : "jsonb_pretty(" + baz + ")"}
    ${p ? ", " + long_hash : ""}
from bar
where x = ${someVar} and y = ${someOtherVar}
--##/sql##
`;
p || console.log(
`
\\set someVar '''${process.argv[2]}'''
\\set someOtherVar '''${process.argv[3]}'''
`
+ module.exports
);
FINAL EDIT:
I have been evolving this concept a lot more, until it became too broad to keep handling strictly by hand.
Finally, taking advantage of the great ES6+ tagged templates, I implemented a much simpler, library-driven approach.
So, in case anyone is interested in it, here it is: SQLTT
Put the query into a database procedure and call the procedure from the code when it is needed, as @paval already answered.
You may also refer here.
create procedure sp_query()
select * from table1;
I have a number of separate text files which I would like to import into an SQL database. The data is not comma-separated, which rules out my idea of importing by splitting on commas. Moreover, the data for each record spans a number of rows. See the example text file below. Could anyone please advise how I could import specific data such as the programmed and mean values, shift number, etc.?
It looks like you have a machine-generated report. The ideal approach is to have that machine produce a different report--one that has no '/////' or any of that crap, just the data you want to import. So that new report's output might look like this.
shift_num, prog_min, mean_sec, att_sec, adt_min
1, 600, 599, 658, 210
...
In practice, though, it's often not "possible" to get reports like that. (That is, it's always possible for the machine to do it, but often humans are unwilling.) When that happens, use your favorite text-processing language to turn the report into usable data.
I like awk for this kind of stuff. Others like perl.
To illustrate, I keyed in this replica of your report. (Saved as test.dat.)
ORDER Nr FG68909 Q.ty Ordered 99
...
SHIFT Nr. 1
////////
PROGRAMMED MEAN
600 min JOB TIME 599 sec
AVERAGE Turnaround Time 658 sec
AVERAGE Delivery Time 210 mins
Then I wrote this awk program. It makes a lot of assumptions about the layout of your report. Some of them will probably fail on real data.
/SHIFT/ { shift = $NF }
/JOB TIME/ {
    programmed = sprintf("%d %s", $1, $2);
    mean = sprintf("%d %s", $(NF-1), $NF);
}
/AVERAGE Turnaround/ { avg_turnaround = sprintf("%d %s", $(NF-1), $NF); }
# Assumes the line "AVERAGE Delivery" is also the end of the record.
/AVERAGE Delivery/ {
    avg_delivery = sprintf("%d %s", $(NF-1), $NF);
    printf("%d, '%s', '%s', '%s', '%s'\n", shift, programmed, mean, avg_turnaround, avg_delivery);
    # Clear the vars for the next record.
    shift = "";
    programmed = "";
    mean = "";
    avg_turnaround = "";
    avg_delivery = "";
}
The output . . .
$ awk -f test.awk test.dat
1, '600 min', '599 sec', '658 sec', '210 mins'
You could write a simple application in C# to parse the contents of the file using regex, turn it into one line, and insert semicolons where required.
I have something like 40 million TIFF documents, all 1-bit single page duplex. In about 40% of cases, the back image of these TIFFs is 'blank' and I'd like to remove them before I do a load to a CMS to reduce space requirements.
Is there a simple method to look at the data content of each page and delete it if it falls under a preset threshold, say 2% 'black'?
I'm technology agnostic on this one, but a C# solution would probably be the easiest to support. Problem is, I've no image manipulation experience so don't really know where to start.
Edit to add: The images are old scans and so are 'dirty', so this is not expected to be an exact science. The threshold would need to be set to avoid the chance of false positives.
You probably should:
open each image
iterate through its pages (using Bitmap.GetFrameCount / Bitmap.SelectActiveFrame methods)
access bits of each page (using Bitmap.LockBits method)
analyze contents of each page (simple loop)
if contents is worthwhile then copy data to another image (Bitmap.LockBits and a loop)
This task isn't particularly complex but will require some code to be written. This site contains some samples that you may search for using the method names as keywords.
P.S. I assume that all of the images can be successfully loaded into a System.Drawing.Bitmap.
You can do something like that with DotImage (disclaimer, I work for Atalasoft and have written most of the underlying classes that you'd be using). The code to do it will look something like this:
public void RemoveBlankPages(Stream stm)
{
    List<int> blanks = new List<int>();
    if (GetBlankPages(stm, blanks)) {
        // all pages blank - delete file? Skip? Your choice.
    }
    else {
        // memory stream is convenient - maybe a temp file instead?
        using (MemoryStream ostm = new MemoryStream()) {
            // pulls out all the blanks and writes to the temp stream
            stm.Seek(0, SeekOrigin.Begin);
            RemoveBlanks(blanks, stm, ostm);
            CopyStream(ostm, stm); // copies first stm to second, truncating at end
        }
    }
}
private bool GetBlankPages(Stream stm, List<int> blanks)
{
    TiffDecoder decoder = new TiffDecoder();
    ImageInfo info = decoder.GetImageInfo(stm);
    for (int i=0; i < info.FrameCount; i++) {
        try {
            stm.Seek(0, SeekOrigin.Begin);
            using (AtalaImage image = decoder.Read(stm, i, null)) {
                if (IsBlankPage(image)) blanks.Add(i);
            }
        }
        catch {
            // bad file - skip? could also try to remove the bad page:
            blanks.Add(i);
        }
    }
    return blanks.Count == info.FrameCount;
}
private bool IsBlankPage(AtalaImage image)
{
    // you might want to configure the command to do noise removal and black border
    // removal (or not) first.
    BlankPageDetectionCommand command = new BlankPageDetectionCommand();
    BlankPageDetectionResults results = command.Apply(image) as BlankPageDetectionResults;
    return results.IsImageBlank;
}
private void RemoveBlanks(List<int> blanks, Stream source, Stream dest)
{
    // blanks needs to be sorted low to high, which it will be if generated from
    // above
    TiffDocument doc = new TiffDocument(source);
    int totalRemoved = 0;
    foreach (int page in blanks) {
        doc.Pages.RemoveAt(page - totalRemoved);
        totalRemoved++;
    }
    doc.Save(dest);
}
You should note that blank page detection is not as simple as "are all the pixels white(-ish)?" since scanning introduces all kinds of interesting artifacts. To get the BlankPageDetectionCommand, you would need the Document Imaging package.
Are you interested in shrinking the files or just want to avoid people wasting their time viewing blank pages? You can do a quick and dirty edit of the files to rid yourself of known blank pages by just patching the second IFD to be 0x00000000. Here's what I mean - TIFF files have a simple layout if you're just navigating through the pages:
TIFF Header (4 bytes)
First IFD offset (4 bytes - typically points to 0x00000008)
IFD:
    Number of tags (2 bytes)
    {individual TIFF tags} (12 bytes each)
    Next IFD offset (4 bytes)
Just patch the "next IFD offset" to a value of 0x00000000 to "unlink" pages beyond the current one.
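If you want to script that patch, here is a minimal Node sketch of the same byte-level trick (shown in JavaScript only because the technique is language-neutral; it assumes classic little/big-endian TIFF, not BigTIFF, and edits the file in place, so test on copies first):
// patchNextIfd.js - zero out the first IFD's "next IFD offset" so pages 2+ are unlinked
var fs = require('fs');

function unlinkAfterFirstPage(path) {
    var fd = fs.openSync(path, 'r+');
    var header = Buffer.alloc(8);
    fs.readSync(fd, header, 0, 8, 0);

    var littleEndian = header.toString('ascii', 0, 2) === 'II'; // 'MM' would be big-endian
    var readU16 = littleEndian ? 'readUInt16LE' : 'readUInt16BE';
    var readU32 = littleEndian ? 'readUInt32LE' : 'readUInt32BE';

    var firstIfdOffset = header[readU32](4); // header bytes 4-7

    var countBuf = Buffer.alloc(2);
    fs.readSync(fd, countBuf, 0, 2, firstIfdOffset);
    var tagCount = countBuf[readU16](0);

    // the "next IFD offset" field sits right after the 12-byte tag entries
    var nextIfdPos = firstIfdOffset + 2 + tagCount * 12;
    fs.writeSync(fd, Buffer.alloc(4), 0, 4, nextIfdPos); // write four zero bytes
    fs.closeSync(fd);
}

unlinkAfterFirstPage(process.argv[2]);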