I am trying to figure out how I can make MS Access use a field value that is three rows lower.
The data comes from an external source that retrieves SNMP data every week. I linked a table in Access to the txt output file.
Here is a sample:
| Device | IP Address | Uptime | SNMP Custom |
--------------------------------------------------
| Router | 192.168.. | 1 day, 1h | IOS version |
Now that I want to get more information from the devices, Cisco decided it was necessary to add new lines to the output file, so the linked table now looks like:
| Device | IP Address | Uptime | SNMP Custom | SNMP Custom 2 |
-----------------------------------------------------------------
| Router | 192.168.. | 1 day, 1h | IOS version |
| Technical Support: sometext
| Copyright (c) sometext
| Compiled | ABCD
Those four lines are all from one device, and ABCD should end up in the SNMP Custom 2 field. The excessive rows I can simply delete, but I have no idea how to move the ABCD value into the SNMP Custom 2 field.
Can this be done using MS Access (VBA?) or classic ASP? Any thoughts are greatly appreciated.
Thanks in advance
If I've understood your OP correctly, try the following in an Access query:
UPDATE myTable SET [SNMP Custom 2] = [IP Address], [IP Address] = "" WHERE [Device] = "Compiled"
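Afterwards, since you mentioned the excess rows can simply be deleted, something along these lines should take care of the two informational rows. This is just a sketch; the LIKE patterns are guesses based on your sample, so adjust them to match your actual data:
DELETE FROM myTable
WHERE [Device] LIKE "Technical Support*"
   OR [Device] LIKE "Copyright*";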
I am trying to build one giant schema that makes querying easier for data users. To achieve that, streaming events have to be joined with User Metadata by USER_ID and ID. In data engineering, this operation is called "data enrichment", right? The tables below are an example.
# `Event` (Stream)
+---------+--------------+---------------------+
| UERR_ID | EVENT | TIMESTAMP |
+---------+--------------+---------------------+
| 1 | page_view | 2020-04-10T12:00:11 |
| 2 | button_click | 2020-04-10T12:01:23 |
| 3 | page_view | 2020-04-10T12:01:44 |
+---------+--------------+---------------------+
# `User Metadata` (Static)
+----+-------+--------+
| ID | NAME | GENDER |
+----+-------+--------+
| 1 | Matt | MALE |
| 2 | John | MALE |
| 3 | Alice | FEMALE |
+----+-------+--------+
==> # Result
+---------+--------------+---------------------+-------+--------+
| UERR_ID | EVENT | TIMESTAMP | NAME | GENDER |
+---------+--------------+---------------------+-------+--------+
| 1 | page_view | 2020-04-10T12:00:11 | Matt | MALE |
| 2 | button_click | 2020-04-10T12:01:23 | John | MALE |
| 3 | page_view | 2020-04-10T12:01:44 | Alice | FEMALE |
+---------+--------------+---------------------+-------+--------+
I was developing this using Spark, with the User Metadata stored in MySQL, and then I realized it would waste Spark's parallelism if the Spark code joined directly against the MySQL tables, right?
I guess the bottleneck would end up on MySQL if traffic increased.
Should I store those tables in a key-value store and update them periodically?
Can you give me some ideas to tackle this problem? How do you usually handle this type of operation?
Solution 1:
As you suggested, you can keep a local cache copy of the table as key-value pairs and update the cache at a regular interval.
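If the metadata table is small, Spark can effectively do the caching for you: snapshot the MySQL table once per interval and broadcast it to the executors, so the join never hits MySQL per record. A minimal Spark SQL sketch, assuming the snapshot is registered as a temp view named user_metadata and the stream as events (both names are placeholders):
SELECT /*+ BROADCAST(u) */
       e.UERR_ID, e.EVENT, e.`TIMESTAMP`, u.NAME, u.GENDER
FROM events e
JOIN user_metadata u
  ON e.UERR_ID = u.ID;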
Solution 2:
You can use a MySQL-to-Kafka connector, as below:
https://debezium.io/documentation/reference/1.1/connectors/mysql.html
For every DML or table-alter operation on your User Metadata table, a corresponding event is fired to a Kafka topic (e.g. db_events). You can run a parallel thread in your Spark streaming job that polls db_events and updates your local key-value cache.
This solution would make your application a near-real-time application in the true sense.
One overhead I can see is the need to run a Kafka Connect service with the MySQL connector (i.e. Debezium) as a plugin.
I'm looking for some guidance in the best way to store user specific data in an SQL database. I'm a little new to SQL so I'm hoping this is a fairly easy concept for those familiar.
I've been reading about normalisation and other good practices as I'm aware that setting a good foundation for the database is crucial and hard to change later.
I think an easy way to explain my scenario is this:
Each website user can choose to create one or more "projects".
Within each project a user will set an "object". This object can be created by the user or it can be chosen from a list of objects which have been created by other users.
Each object has a variable number of settings; let's say an object could have between 5 and 25 settings. Each setting could simply be an integer value between 0 and 100.
Originally I thought about doing it this way:
Project Table
+-----------+-------------+------+---------+----------+---------+----------+------+--------+
| ProjectID | ProjectName | User | Object1 | Object2 | SetID | Notes | Date | Photo |
+-----------+-------------+------+---------+----------+---------+----------+------+--------+
| PID0001 | My Project | Bob | OBJ0001 | OBJ00056 | SID0045 | my notes | | |
+-----------+-------------+------+---------+----------+---------+----------+------+--------+
Each user can create a project and reference different objects and object settings profiles within that project.
Object Table
+---------+------------+--------+---------+-------+--------+----------+-------+-------+---------+---------+--------+
| ObjID | ObjName | ObjVer | Date | User | Set1ID | Set1Name | Set1X | Set1Y | Set1Min | Set1Max | Set2ID |
+---------+------------+--------+---------+-------+--------+----------+-------+-------+---------+---------+--------+
| OBJ0001 | My Object | Bob | | Bob | S00013 | Volts | 12 | 52 | 1 | 80 | S00032 |
+---------+------------+--------+---------+-------+--------+----------+-------+-------+---------+---------+--------+
This table would define all the configurable settings for the object, which could range from 1 setting to 25 settings. In this example, each setting the user adds to the object has six parameters, such as min/max allowed values, name, ID, etc.
If I do it this way, I would end up with over 100 columns, many of which could be empty...
Object Settings Table
+---------+-------------+---------+------------+------+---------+---------+---------+
| SetID | Setname | ObjID | Date | User | Set1Val | Set2Val | Set3Val |
+---------+-------------+---------+------------+------+---------+---------+---------+
| SID0045 | My Settings | OBJ0001 | 12-12-2017 | Bob | 12 | 32 | 98 |
+---------+-------------+---------+------------+------+---------+---------+---------+
In this table, each row would define a user's settings profile for that object - basically just the value for the settings which were defined in the object table. Each user could have a different set of settings for the same object when it's used in their project.
So, the above method seems bad to me. It makes sense in my head but the number of columns will get out of control when allowing multiple settings.
I suppose the better way of doing this would be to go vertical by adding a row for each setting, but I'm just not sure how this would look (a rough sketch of what I imagine is below). How can I structure it this way while still allowing the "sharing" of objects between user projects?
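To illustrate what I mean by going vertical, here is what I imagine; all table and column names are just placeholders:
CREATE TABLE ObjectSetting (
    SettingID INT PRIMARY KEY,      -- one row per configurable setting
    ObjID     VARCHAR(10) NOT NULL, -- FK to the object it belongs to
    SetName   VARCHAR(50),
    SetMin    INT,                  -- minimum allowed value
    SetMax    INT                   -- maximum allowed value
);
CREATE TABLE ProjectSettingValue (
    ProjectID VARCHAR(10) NOT NULL, -- FK to the project using the object
    SettingID INT NOT NULL,         -- FK to ObjectSetting
    SetVal    INT,                  -- the user's 0-100 value
    PRIMARY KEY (ProjectID, SettingID)
);
That way an object defines its settings as rows, each project stores its own values for them, and the object itself can still be shared between projects. Is something like this the right direction?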
I am currently in the process of converting the player-saving features of a game's multi-player engine into an SQL database, for the integration of a webpage to display/modify/sell characters. The original system stored all data in text files, which was an awful way of dealing with this data as it was tied to the game only. The text files stored the user's Username, Password, ID, and player data, allowing for only one character. I have separated this into tables and can successfully save and load character data. The tables I have are quite large, so for example purposes I will use the following:
account:
+----+----------+----------+
| ID | Username | Password |
+----+----------+----------+
| 1 | Player1 | 123456 | (Secure passwords much?)
| 2 | Player2 | password | (These are actually hashed in the real db)
+----+----------+----------+
account_character:
+------------+--------------+
| Account_ID | Character_ID |
+------------+--------------+
| 1 | 1 |
| 1 | 2 |
| 2 | 3 |
+------------+--------------+
character:
+----+-----------+-----------+-----------+--------+--------+
| ID | PositionX | PositionY | PositionZ | Gender | Energy | etc....
+----+-----------+-----------+-----------+--------+--------+
| 1 | 100 | 150 | 1 | 1 | 100 |
| 2 | 30 | 90 | 0 | 1 | 100 |
| 3 | 420 | 210 | 2 | 0 | 53.5 |
+----+-----------+-----------+-----------+--------+--------+
These tables are linked using relationships.
What I have so far is: the user logs in, and the server queries their username and matches the password. If the passwords match, the server begins to load the character data based on the ID loaded from the account during login.
This is where I am stuck. I have successfully done this through phpMyAdmin using the SQL command interface, but as it was around 4 AM I was tired and accidentally closed the tab that contained the command before I saved it. I have tried to replicate this but I simply cannot obtain the data I require in the query.
I've recently completed a course in databases at college and got a distinction, but for the life of me I cannot get this to work again... I have followed tutorials but as the situations usually differ from mine I cannot apply them until I understand them. I know I'm going to kick myself once I have a working command.
Tl;dr - I wish to query all character data linked to an account using the account's 'ID'.
I think this should work:
SELECT
    *
FROM
    account_character ac
    INNER JOIN account a ON ac.Account_ID = a.ID
    INNER JOIN `character` c ON ac.Character_ID = c.ID
WHERE
    a.Username = ? AND
    a.Password = ?
;
We start by joining together all the relevant tables, and then filter to get characters just for the current user.
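If you already have the account's ID from the login step (your tl;dr), you can filter on it directly and skip the account table entirely. A variant of the same idea; note that character is backtick-quoted because CHARACTER is a reserved word in MySQL:
SELECT c.*
FROM account_character ac
INNER JOIN `character` c ON ac.Character_ID = c.ID
WHERE ac.Account_ID = ?;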
I'm using the (deprecated) CSV Plugin in SonarQube to create some analyses. Is there a way to get the same information by using the Web API?
"same information" means in my case:
FullClasspath | Metric 1 | Metric 2 | ... | Metric n
----------------------------------------------------
org.myClass1  | Value 1  | Value 2  | ... | Value n
org.myClass2  | Value 1  | Value 2  | ... | Value n
org.myClass3  | Value 1  | Value 2  | ... | Value n
What I need is a combination of getting all Metrics and receiving all "Classes" instead of Issues.
I'd like to use SonarQube now and in the future. This is why I'd prefer to alter my setup to use the Web Api.
Best Regards
EDIT: Solution
The request I have to send to my Sonar server has this structure:
SERVER/api/resources?resource=COMPANY:PROJECT&depth=-1&metrics=ALLNEEDEDMETRICS
for example: http://nemo.sonarqube.org/api/resources?resource=org.codehaus.sonar:sonar&depth=-1&metrics=ncloc,complexity,class_complexity,violations_density,duplicated_blocks,ca
Have a look at the following "/api/resources" web service documentation: http://docs.codehaus.org/pages/viewpage.action?pageId=229743280
I am stuck and need help/advice. I am pretty sure that I am not the first one to run into this problem, but I can't seem to find the answer on the web.
We are collecting all kinds of data from many factories. This is mainly forecasted values of yearly peak production, etc. This data collection is repeated every year.
We currently keep track of this data in Excel, which has the following structure:
Factory | 2010 | 2011 | 2012 | ..
----------------------------------
A | 20 | 30 | 28 | ..
B | | 39 | 55 | ..
In this example factory B just starts production in the year 2011. If we collect data for an additional year, we simply add a column. If forecasted data changes, we simply enter the new values and lose the old ones. You can imagine that this way of working has its limitations: the table becomes rather sparse for missing data. Old values cannot be traced back. There is no reference to the source of the values.
To satisfy our need for a better system, I put my antique knowledge of databases to work. In Access 2007 I created the following structure:
Table: Factories
FacID | FactoryName
---------------------
1 | A
2 | B
Table: Sources
SouID | Source | SourceDate
---------------------------------
1 | DocumentX | Sep. 2009
2 | DocumentY | Jan. 2010
Table: Parameters
ParID | FacID | SouID | ParamType | Year | Value
------------------------------------------------------
1 | 1 | 1 | PeakProduction | 2010 | 20
2 | 1 | 1 | PeakProduction | 2011 | 30
3 | 1 | 1 | PeakProduction | 2012 | 28
4 | 2 | 1 | PeakProduction | 2011 | 39
5 | 2 | 2 | PeakProduction | 2012 | 55
For each new data collection we just add a new source document and append to the Parameters table. In this way, we can always trace back to old data. Furthermore, if additional years are collected, there is no need to add additional columns to any table.
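For example, the most recent value for each factory and year can be pulled with a correlated subquery along these lines (a sketch, assuming SourceDate is stored as an actual date rather than the text shown above):
SELECT p.FacID, p.Year, p.Value
FROM Parameters AS p
INNER JOIN Sources AS s ON p.SouID = s.SouID
WHERE p.ParamType = "PeakProduction"
  AND s.SourceDate = (
      SELECT Max(s2.SourceDate)
      FROM Parameters AS p2
      INNER JOIN Sources AS s2 ON p2.SouID = s2.SouID
      WHERE p2.FacID = p.FacID
        AND p2.Year = p.Year
        AND p2.ParamType = p.ParamType
  );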
Although the actual setup is more complex, the above example is sufficient to illustrate the problem that I am running into: to enter data into the database I would like to have a single form which resembles the original Excel sheet layout, i.e.:
Factory | 2010 | 2011 | 2012 |
------------------------------
A | | | |
B | | | |
Of course, this form will have some drop-down menu to select the source document and parameter type ("PeakProduction" in the example).
My question: with crosstab queries it is easy to create such a view based on existing data in the database (a simplified version of mine is below); however, entering new values into a crosstab is not allowed. What can I do to make this work, and how?
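For reference, the read-only crosstab looks something like this, simplified:
TRANSFORM First(Parameters.Value)
SELECT Factories.FactoryName
FROM Factories INNER JOIN Parameters ON Factories.FacID = Parameters.FacID
WHERE Parameters.ParamType = "PeakProduction"
GROUP BY Factories.FactoryName
PIVOT Parameters.Year;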
Should I reconsider the design of my database? Should I work with VBA? Link the Access database with Excel sheets?
Thanks!
Where you are dealing with two-dimensional data that has been normalised into a table, it is problematic for the user to maintain using Access. My approach has been to use the appropriate tool for the job, which looks like Excel in this case. I have an Excel spreadsheet template for data entry; the user enters the data into that. Then in VBA I open the spreadsheet as an embedded object, retrieve the cell contents, and insert them into the table. Something like below:
Dim myRec As DAO.Recordset
Dim xlApp As Excel.Application
Dim xlWrksht As Excel.Worksheet

' Open the target table and an Excel instance
Set myRec = CurrentDb.OpenRecordset("NameOfTable")
Set xlApp = CreateObject("Excel.Application")

' Workbooks are opened via the Workbooks collection (not xlApp.Open)
Set xlWrksht = xlApp.Workbooks.Open("PathOfWorksheet").Worksheets("WorksheetNumber")

' Copy the cell values into a new record
myRec.AddNew
myRec.Fields("NameOfFields") = xlWrksht.Cells(1, "A")
'......
myRec.Update

' Clean up so no orphaned Excel instance is left running
xlApp.Quit
Set xlApp = Nothing
myRec.Close