Node.JS String encoding issues - mysql

I'm writing an interface between a TCP chat server and a MySQL server. A user submits a value and is assigned a name pulled from the row in the database matching that value. When I query the database with a value received over a telnet session, I get no results, but the same query returns rows when I run it in adminer or the mysql command-line client.
My guess is that it comes down to an encoding issue. Being rather new to Node, I don't really know what to do. I have a good amount of experience with JavaScript, just not with Node.
Code
function setCliNameOrKick(client, key){
    key = String(key).replace(/\n\r\b\\\s/gi, "");
    var q = "SELECT username FROM webusers WHERE lic = '" + key + "'; --";
    console.log(key);
    cli.query(q, function cb(e, r, f){
        if(client != null){
            console.log(r);
            if(r.length >= 1){
                client.name = r['username'];
            }else{
                client.stream.end();
            }
        }else{
            console.log("Was Passed A Null Client!");
        }
    });
}
That comes from the DB query tool. It takes input from a string sent by the client on connect, alongside an object representing the client:
stream.addListener("data", function(data){
    if(client.name == null){
        data = String(data).replace(new RegExp("[\n]+", "g"), "");
        cNameBuff = cNameBuff + data;
        if(cNameBuff.length > 1){ // min length
            //client.name = cNameB;
            db.set(client, cNameBuff);
            onAuth(client);
        }
        return;
    }
    data = String(data);
    if(data.length >= 2){
        srv.procChat(client, data);
    }
});
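A likely culprit with telnet input is that each line arrives with a trailing \r\n, so the key used in the WHERE clause is not quite the string you see logged, and the replace above never strips it because /\n\r\b\\\s/ matches that literal character sequence rather than a character class. A minimal sketch of a cleanup step, assuming keys contain only word characters and hyphens (sanitizeKey is a hypothetical helper name), with the parameterized form of the query in a comment:

```javascript
// Strip CR/LF and surrounding whitespace, then keep only the characters a
// key is expected to contain. The allowed alphabet here is an assumption
// for illustration; widen it to match your real license-key format.
function sanitizeKey(raw) {
    return String(raw).trim().replace(/[^\w-]/g, "");
}

// With the mysql module, a placeholder handles escaping for you and closes
// the SQL-injection hole that string concatenation leaves open:
//   cli.query("SELECT username FROM webusers WHERE lic = ?",
//       [sanitizeKey(key)],
//       function (e, rows) {
//           if (rows.length >= 1) client.name = rows[0].username;
//       });
```

Note that results come back as an array of row objects, so the username lives at rows[0].username, not rows['username'].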

Related

Multiple APIs are called with (change) in angular

On selecting a date and hitting Enter, an API call should be made. There's also an x icon in the input; clicking it should call the API with the date 01/01/12. The input auto-completes dates as well: typing 2/3 and hitting Enter turns it into 02/03/20. The problem is that if the input is empty and I hit Enter, the same API call is made three times.
The desired behaviour is that selecting a date fires the API call without hitting Enter. I can't just use the change event, because then typing 2/3 and pressing Tab doesn't auto-complete the date, and hitting Enter still fires multiple calls. Is there a way to stop the duplicate API calls?
(change)="startDate($event)" (keydown.enter)="CallAPI($event)"
startDate(event) {
    if (event.target.value == '' || event.target.value == null)
        this.cutoverFilterApi(event)
}
CallAPI(event) {
    let data = event.target.value;
    if (data == '' || data == null || data == "NaN/NaN/NaN") {
        data = "01/01/12";
    }
    this.httpService.getData('PATH' + data).subscribe((response: any) => {
        this.dateChangeData = response.results;
        this.rowData = response.results;
        this.gridApi.setRowData(this.rowData);
    });
}
You could keep the last valid value and skip the request when it is the same.
Something like this:
lastDate = null; // <- variable to keep the last value

CallAPI(event) {
    let data = event.target.value;
    if (data == '' || data == null || data == "NaN/NaN/NaN") {
        data = "01/01/12";
    }
    // check whether data is the same as the last request
    if (this.lastDate === data) {
        return;
    }
    this.lastDate = data; // <- remember the value we are about to request
    this.httpService.getData('PATH' + data).subscribe((response: any) => {
        this.dateChangeData = response.results;
        this.rowData = response.results;
        this.gridApi.setRowData(this.rowData);
    });
}
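The guard above can be isolated into a plain function so the dedup logic is testable without Angular. A sketch, where makeDedupedCaller is a hypothetical helper and fetch stands in for the real httpService call; it returns true only when a request would actually be sent:

```javascript
// Wraps a request function so that repeated calls with the same effective
// value are skipped. Empty or invalid input falls back to the default date,
// mirroring the handler above.
function makeDedupedCaller(fetch) {
    let lastDate = null; // last value actually sent
    return function callApi(value) {
        const data = value && value !== "NaN/NaN/NaN" ? value : "01/01/12";
        if (lastDate === data) return false; // same as last request: skip
        lastDate = data;
        fetch(data);
        return true;
    };
}
```

With this in place, hitting Enter three times on an empty input fires the request once with "01/01/12" and skips the two repeats.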
You can use
(dateInput)="addEvent('input', $event)" (dateChange)="addEvent('change', $event)"
instead of
(change)="startDate($event)" (keydown.enter)="CallAPI($event)"
I have an example using the Angular Material datepicker, which will make your code simpler.
Reference link
I hope this helps. :)

MySQL Query crashes my program even when in try statement

It errors out with Cannot read property 'id' of undefined when the record can't be found.
How can I keep it from crashing and handle the undefined case?
let blacklisted = false;
let conStr = "SELECT * FROM `blacklist` WHERE `id` = '" + message.author.id + "'";
con.query(conStr, function(error, result, field) {
    console.log(result[0].id);
    if (result[0].id) {
        console.log("Van")
        blacklisted = false;
    }
});
if (message.author.id !== "397487086522990602" && blacklisted) { /* Actual Code */ }
I believe you should add proper checks. try/catch covers connection and query-execution failures, not the case where no rows are returned.
So just check whether result[0] exists.
The following code should serve your purpose:
let blacklisted = false;
let conStr = "SELECT * FROM `blacklist` WHERE `id` = '" + message.author.id + "'";
con.query(conStr, function(error, result, field) {
    if (result[0] && result[0].id) { // add check for result[0]
        console.log("Van")
        blacklisted = false;
    }
});
if (message.author.id !== "397487086522990602" && blacklisted) { /* Actual Code */ }
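The guard generalizes to a small helper that reads an optional row defensively; mysql result sets are plain arrays, so an empty result is [] and result[0] is undefined. A sketch (rowId is a hypothetical helper name):

```javascript
// Returns the id of the first row, or null when the result set is empty,
// missing, or the row has no id. Touch no properties until the row is
// known to exist.
function rowId(result) {
    if (!Array.isArray(result) || result.length === 0) return null;
    return result[0].id != null ? result[0].id : null;
}
```

In the real callback it is also worth checking the error argument first (if (error) return ...), since result is undefined when the query itself fails.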

firebase update by batches does not work with large dataset

I want to populate a feed to almost one million users when content is posted by a user with a high number of followers, using GCP Cloud Functions.
To do this, I am designing the firebase update of the feed to be split into a number of small batches. If I don't split the update, I expect the following issues:
i) keeping one million user feeds in memory will exceed the maximum allocated 2 GB of memory;
ii) updating one million entries in one go will not work (how long would it take to update one million entries?).
However, the batch update only works for me when each batch inserts around 100 entries per invocation. When I tried 1000 per batch, only the first batch was inserted. I wonder if this is due to:
i) a time-out? However, I don't see that error in the log.
ii) the object variable userFeeds{} holding the batch being destroyed when the function goes out of scope?
Below is my code:
var admin = require('firebase-admin');
var spark = require('./spark');
var user = require('./user');
var Promise = require('promise');
var sparkRecord;

exports.newSpark = function (sparkID) {
    var getSparkPromise = spark.getSpark(sparkID);
    Promise.all([getSparkPromise]).then(function(result) {
        var userSpark = result[0];
        sparkRecord = userSpark;
        sparkRecord.sparkID = sparkID;
        // the batch update only works if the entries per batch is around 100 instead of 1000
        populateFeedsToFollowers(sparkRecord.uidFrom, 100, null, myCallback);
    });
};

var populateFeedsToFollowers = function(uid, fetchSize, startKey, callBack){
    var fetchCount = 0;
    // retrieving only the follower list, by batch
    user.setFetchLimit(fetchSize);
    user.setStartKey(startKey);
    // I use this object to keep the entries for the current batch
    var userFeeds = {};
    user.getFollowersByBatch(uid).then(function(users){
        if(users == null){
            callBack(null, null, null);
            return;
        }
        // looping through the followers in this batch
        Object.keys(users).forEach(function(userKey) {
            fetchCount += 1;
            if(fetchCount > fetchSize){
                // updating the users' feeds for this batch
                admin.database().ref().update(userFeeds);
                callBack(null, userKey);
                fetchCount = 0;
                return;
            }else{
                userFeeds['/userFeed/' + userKey + '/' + sparkRecord.sparkID] = {
                    phase: sparkRecord.phase,
                    postTimeIntervalSince1970: sparkRecord.postTimeIntervalSince1970
                }
            }
        }); // Object.keys(users).forEach
        if(fetchCount > 0){
            admin.database().ref().update(userFeeds);
        }
    }); // user.getFollowersByBatch
};

var myCallback = function(err, nextKey) {
    if (err) throw err; // check for the error and throw if it exists
    if (nextKey != null) { // if there are remaining followers, keep populating
        populateFeedsToFollowers(sparkRecord.uidFrom, 100, nextKey, myCallback);
    }
};
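One thing worth checking: the code above never returns the promises from update(), and Cloud Functions may reclaim the instance as soon as the function returns, which can silently drop in-flight writes. A sketch of the chunking idea with the writes chained and the final promise returned; updateInBatches is a hypothetical helper, and applyUpdate stands in for admin.database().ref().update():

```javascript
// Split the follower keys into fixed-size batches and run one multi-path
// update per batch, waiting for each write to finish before starting the
// next. Returning the chained promise lets the caller (and the Cloud
// Functions runtime) know when every batch has been committed.
function updateInBatches(keys, batchSize, applyUpdate) {
    const batches = [];
    for (let i = 0; i < keys.length; i += batchSize) {
        const feed = {};
        for (const key of keys.slice(i, i + batchSize)) {
            feed["/userFeed/" + key] = { phase: "demo" }; // payload is illustrative
        }
        batches.push(feed);
    }
    return batches.reduce(
        (p, feed) => p.then(() => applyUpdate(feed)),
        Promise.resolve()
    ).then(() => batches.length);
}
```

Sequential chaining also keeps only one batch's worth of feed entries in memory at a time, which addresses point (i) above.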

Refactor non-blocking nodejs do..while loop

I'm writing an api in node.js. The first webservice endpoint - /create - creates a new db entry with a randomised 6-character hash, much like a bit.ly hash.
Having done something similar in PHP, I've written a do..while loop which generates a random string and checks my mysql db (using node-mysql) to make sure it's free. I've also got a counter in there, so I can fail after x iterations if need be.
var i = 0;
var alphabet = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'];
var hash = null;
var success = false;

do {
    // generate a random hash by shuffling the alphabet,
    // joining it and getting 6 chars
    hash = alphabet.sort(function(){
        return 0.5 - Math.random();
    }).join('').substr(0, 6);
    console.log(i + ': checking hash ' + hash);
    // see if it exists in the db
    db.query("SELECT hash FROM trips WHERE hash = " + hash, function(err, results){
        if(results.length == 0) {
            // the hash is free to use :)
            success = true;
        } else {
            // the hash is already taken :(
            success = false;
        }
    });
    // increment the counter
    i++;
} while(success === false && i < 10);
I currently only have one hash in my db (abcdef), but the loop is getting to ten and failing because it thinks each new hash is already present.
I'm pretty sure this is because of the non-blocking nature of node.js. This is obviously A Good Thing, but in my case I need the loop to block until the query has returned.
I'm pretty sure I could hack this by doing something like:
var q = db.query(...);
But I know that's throwing away a major feature of node.js.
Is there a code pattern for this sort of need?
I'm pretty sure this is because of the non-blocking nature of node.js.
Yes.
This is obviously A Good Thing, but in my case I need the loop to block until the query has returned.
No, you most certainly don't want to do that.
Embrace the asynchronous approach. Work with callbacks:
function generateHash(onSuccess, onError, retryCount) {
    // generate a random hash by shuffling the alphabet,
    // joining it and getting 6 chars
    var hash = alphabet.sort(function(){
        return 0.5 - Math.random();
    }).join('').substr(0, 6);
    // see if it exists in the db
    db.query(
        "SELECT hash FROM trips WHERE hash = '" + hash + "'",
        function(err, results){
            if (results.length == 0) {
                // the hash is free to use :)
                onSuccess(hash);
            } else {
                // the hash is already taken :(
                if (retryCount > 1) {
                    generateHash(onSuccess, onError, retryCount - 1);
                } else {
                    onError();
                }
            }
        }
    );
}

generateHash(
    function(hash) { console.log('Success! New hash created: ' + hash); },
    function() { console.log('Error! retry limit reached'); },
    6
);
var i = 0;

function generateHash(callback) {
    // generate a random hash by shuffling the alphabet,
    // joining it and getting 6 chars
    hash = alphabet.sort(function(){
        return 0.5 - Math.random();
    }).join('').substr(0, 6);
    console.log(i + ': checking hash ' + hash);
    // see if it exists in the db
    db.query("SELECT hash FROM trips WHERE hash = '" + hash + "'", function(err, results){
        if (results.length == 0) {
            // the hash is free to use :)
            callback(null, hash);
        } else {
            // increment the counter
            i++;
            if (i < 10)
                generateHash(callback); // another attempt
            else
                callback('error'); // give up and report failure
        }
    });
}
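The retry pattern from the answers condenses into a testable sketch once the database lookup is abstracted away; here isTaken is a fake stand-in for db.query, and the recursion replaces the do..while loop:

```javascript
// Generate a candidate hash and ask the lookup whether it is taken; retry
// up to retriesLeft times, then report failure through the callback. The
// sort-with-random-comparator shuffle is kept from the question for
// fidelity, though it produces a biased shuffle.
function generateHash(isTaken, retriesLeft, done) {
    const alphabet = "abcdefghij".split("");
    const hash = alphabet
        .sort(() => 0.5 - Math.random())
        .join("")
        .substr(0, 6);
    isTaken(hash, function (taken) {
        if (!taken) return done(null, hash);
        if (retriesLeft > 1) return generateHash(isTaken, retriesLeft - 1, done);
        done(new Error("retry limit reached"));
    });
}
```

The key design point is that "loop until the query says free" becomes "recurse from inside the query's callback": each attempt only starts after the previous lookup has answered, so no blocking is needed.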

Entity Framework SaveChanges function won't commit my changes

I have an initial selection which I place into a list. I loop through each record in the list and, where it meets certain criteria, run through a series of inserts, deletes and updates, finally calling SaveChanges() to commit the changes.
The code runs through without raising an exception, but no changes are reflected in the database. I have been searching the web with no luck.
I'm using VS2008 with a SQL Server 2008 backend.
Please help?
using (SMSEntities db = new SMSEntities())
{
    try
    {
        // Get SMS's to send from Inbox
        List<Inbox> tmpInbox = (from c in db.Inboxes
                                where c.Status != "NEW" && c.Status != "SUCCESS"
                                select c).ToList();
        foreach (Inbox tmpInboxIndex in tmpInbox)
        {
            bool success = false;
            // Check status here
            string SentStatus = CheckSMSSentToProvider(tmpInboxIndex.StatusTrackingID);
            // Define a transaction scope for the operations.
            using (TransactionScope transaction = new TransactionScope())
            {
                try
                {
                    if ((SentStatus == "DELIVERED") || (SentStatus == "NOTFOUND") || (SentStatus == "DELETED") || (SentStatus == "REJECTED") || (SentStatus == "UNDELIVERED"))
                    {
                        // Insert the Log row
                        Log newLog = new Log();
                        newLog.InboxID = tmpInboxIndex.InboxID;
                        newLog.CellNo = tmpInboxIndex.CellNo;
                        newLog.SourceCellNo = tmpInboxIndex.SourceCellNo;
                        newLog.Message = tmpInboxIndex.Message;
                        newLog.Header = tmpInboxIndex.Header;
                        newLog.MessageDate = tmpInboxIndex.MessageDate;
                        newLog.AccountID = tmpInboxIndex.AccountID;
                        newLog.ProcessedDate = DateTime.Now;
                        newLog.Status = tmpInboxIndex.Status;
                        newLog.StatusTrackingID = tmpInboxIndex.StatusTrackingID;
                        newLog.NoOfAttempts = tmpInboxIndex.NoOfAttempts;
                        newLog.LastAttemptDate = tmpInboxIndex.LastAttemptDate;
                        db.Logs.AddObject(newLog);
                        // Delete the Inbox row
                        if (tmpInbox != null)
                        {
                            var deleteInbox = (from c in db.Inboxes where c.InboxID == tmpInboxIndex.InboxID select c).FirstOrDefault();
                            if (deleteInbox != null)
                            {
                                db.DeleteObject(deleteInbox);
                                //db.SaveChanges(SaveOptions.DetectChangesBeforeSave);
                            }
                        }
                    }
                    else
                    {
                        // Update inbox status
                        var tmpUpdateInbox = (from c in db.Inboxes where c.InboxID == tmpInboxIndex.InboxID select c).FirstOrDefault();
                        tmpUpdateInbox.Status = SentStatus;
                        tmpUpdateInbox.NoOfAttempts = tmpInboxIndex.NoOfAttempts + 1;
                        tmpUpdateInbox.LastAttemptDate = DateTime.Now;
                        //db.SaveChanges(SaveOptions.DetectChangesBeforeSave);
                    }
                    // Mark the transaction as complete.
                    transaction.Complete();
                    success = true;
                    //break;
                }
                catch (Exception ex)
                {
                    // Handle errors and deadlocks here and retry if needed.
                    // Allow an UpdateException to pass through and
                    // retry, otherwise stop the execution.
                    if (ex.GetType() != typeof(UpdateException))
                    {
                        Console.WriteLine("An error occured. "
                            + "The operation cannot be retried."
                            + ex.Message);
                        break;
                    }
                    // If we get to this point, the operation will be retried.
                }
            }
            if (success)
            {
                // Reset the context since the operation succeeded.
                //db.AcceptAllChanges();
                db.SaveChanges();
            }
        }
        // Dispose the object context.
        db.Dispose();
    }
    catch (Exception exp)
    {
        throw new Exception("ERROR - " + exp.Message.ToString(), exp);
    }
}
return true;
Are you using a local database file? You may be looking for changes in the wrong place. By default, when the program starts, VS copies the database file into the debug or release folder. Then the program runs and changes are made, and saved, to the file in the debug or release folder. The program ends, and when you look at the database in your source folder it looks the same. You can change the connection string in the app.config to use an absolute path to avoid this.
See http://blogs.msdn.com/b/smartclientdata/archive/2005/08/26/456886.aspx for more info
The TransactionScope is useless if you do not put the call to SaveChanges into it.
Either move the call to SaveChanges into it or remove the TransactionScope completely.