I'm using the (deprecated) CSV plugin in SonarQube to create some analysis. Is there a way to get the same information using the Web API?
"same information" means in my case:
FullClasspath | Metric 1 | Metric 2 | ... | Metric n
----------------------------------------------------
org.myClass1  | Value 1  | Value 2  | ... | Value n
org.myClass2  | Value 1  | Value 2  | ... | Value n
org.myClass3  | Value 1  | Value 2  | ... | Value n
What I need is a combination of getting all Metrics and receiving all "Classes" instead of Issues.
I'd like to keep using SonarQube now and in the future, which is why I'd prefer to change my setup to use the Web API.
Best Regards
EDIT: Solution
The request I have to send to my Sonar server has this structure:
SERVER/api/resources?resource=COMPANY:PROJECT&depth=-1&metrics=ALLNEEDEDMETRICS
for example: http://nemo.sonarqube.org/api/resources?resource=org.codehaus.sonar:sonar&depth=-1&metrics=ncloc,complexity,class_complexity,violations_density,duplicated_blocks,ca
Have a look at the following "/api/resources" web service documentation: http://docs.codehaus.org/pages/viewpage.action?pageId=229743280
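In case it helps anyone else, here is a small Python sketch that turns that /api/resources call into a CSV similar to the old plugin's output. The requests package is an assumption, and the JSON field names ("msr", "key", "val") are based on my reading of the pre-5.x response format, so they may need adjusting for your version:

import csv
import requests  # assumption: the requests package is installed

SERVER = "http://nemo.sonarqube.org"   # your SonarQube server
RESOURCE = "org.codehaus.sonar:sonar"  # COMPANY:PROJECT key
METRICS = ["ncloc", "complexity", "class_complexity", "violations_density"]

resp = requests.get(
    SERVER + "/api/resources",
    params={"resource": RESOURCE, "depth": -1, "metrics": ",".join(METRICS)},
)
resp.raise_for_status()

with open("metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["FullClasspath"] + METRICS)
    for resource in resp.json():
        # each returned resource should carry its measures in a "msr" list of {"key", "val"} entries
        measures = {m["key"]: m.get("val") for m in resource.get("msr", [])}
        writer.writerow([resource.get("key")] + [measures.get(metric) for metric in METRICS])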
Community!
Story: I am trying to upload a CSV file with a huge batch of products to my e-commerce shop. Many of the products are very similar, but every column differs slightly. Luckily, the plugin I use can handle this, but it needs the same title for the entire product range or some reference to its parent product, and that reference is sadly not there.
Now I want to know how I can find values in a CSV file that are nearly the same (SQL has something similar with LIKE '%...%') so that I can structure the table appropriately. It is hard to describe what I want to achieve, so here is an example of what I'm looking for.
I basically want to transform this table:
+---------------+---------------+---------------+---------------+
| ID            | Title         | EAN           | ...           |
+---------------+---------------+---------------+---------------+
| 1             | AquaMat 3.6ft | 1234567890    | ...           |
+---------------+---------------+---------------+---------------+
| 2             | AquaMat 3.8ft | 1234567891    | ...           |
+---------------+---------------+---------------+---------------+
| 3             | AquaMat 4ft   | 1234567892    | ...           |
+---------------+---------------+---------------+---------------+
into this:
+---------------+---------------+---------------+---------------+
| ID            | Title         | EAN           | ...           |
+---------------+---------------+---------------+---------------+
| 1             | AquaMat       | 1234567890    | ...           |
+---------------+---------------+---------------+---------------+
| 2             | AquaMat       | 1234567891    | ...           |
+---------------+---------------+---------------+---------------+
| 3             | AquaMat       | 1234567892    | ...           |
+---------------+---------------+---------------+---------------+
The extra data can be scrapped. Can I do this with Excel? With macros? With Python?
Thank you for taking time and reading this.
If you have any questions, then feel free to ask.
EDIT:
The Title column contains products with completely different names and may even contain extra whitespace. Some products have one attribute while others have up to three, but this can be sorted out manually.
By "nearly the same" I mean what you can see in the table: the titles are basically the same but not identical, and I want to remove the attributes from them. Also, there are no other columns with any more details, only numbers and the attributes that I am trying to cut off the title.
Here's an idea using Python and .split():
import csv

with open('testfile.csv', 'r', encoding='utf-8-sig') as inputfile:
    csv_reader = csv.reader(inputfile, delimiter=',')
    with open('outputfile.csv', 'w', newline='') as outputfile:
        w = csv.writer(outputfile)
        header = ['ID', 'Title', 'EAN', 'Product', 'Attr1', 'Attr2', 'Attr3']
        w.writerow(header)
        for row in csv_reader:
            # skip the header row of the input file
            if row[0] == 'ID':
                continue
            # split the title on whitespace: the first part is the product name,
            # the remaining parts are attributes such as "3.6ft"
            for item in row[1].split():
                # if you want, you can add some other conditions on the attribute (item) in here
                row.append(item)
            print('row: {}'.format(row))
            w.writerow(row)
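With the sample table from the question (assuming ID, Title and EAN are the only input columns), the first product would come out as 1,AquaMat 3.6ft,1234567890,AquaMat,3.6ft: the original Title is kept, and the split parts land in the Product and Attr columns.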
I think we're going to need more information about what, exactly, you're trying to achieve. Is it just the extra text after "AquaMat" (for example) that you want to remove? If so, you could simply loop through the CSV file and remove anything after "AquaMat" in the "Title" column.
I assume from your description, though, that there is more to it than this.
Perhaps a starting point would be to let us know what you mean by "nearly the same". Do you want exactly what SQL means by LIKE, or something different?
EDIT:
You might check out Python's regular expressions (the re module documentation). If your "nearly the same" can be translated into a regular expression as described in the docs, then you could use Python to loop through the CSV file and search/replace terms based on it.
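For example, assuming the attributes always look like a number followed by "ft" (that pattern is only a guess based on the sample data), something like this would strip them from a title:

import re

title = "AquaMat 3.6ft"
# remove a trailing "<number>ft" attribute (adjust the pattern to your real data)
base = re.sub(r'\s+\d+(\.\d+)?ft$', '', title)
print(base)  # AquaMat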
Are all the "nearly the same" things in the "Title" column, or could they be in other columns as well?
I'm new to grafana and playing around to see if it could fit my needs for a research lab.
I'm using grafana-server Version 4.5.2 (commit: ec2b0fe)
I tried to follow the grafana documentation about mysql datasources (sorry I'm not allowed to post more than two links, just try to search in your favorite search engine...)
I have successfully added a MySQL data source.
Here is my database:
mysql> DESC meteo;
+-------------+----------+------+-----+---------+----------------+
| Field       | Type     | Null | Key | Default | Extra          |
+-------------+----------+------+-----+---------+----------------+
| id          | int(100) | NO   | PRI | NULL    | auto_increment |
| date_insert | datetime | NO   |     | NULL    |                |
| temperature | float    | NO   |     | NULL    |                |
+-------------+----------+------+-----+---------+----------------+
Following the documentation I've added a panel "Table" with the following query...
SELECT
  date_insert as 'Date',
  temperature as 'Temperature'
FROM meteo
...and chosen "Format as Table".
The result is ok as you can see.
Grafana Panel Format Table
Now I would like to have a graph like this :
Grafana Panel Format Time series
How can I achieve this with my database? I don't understand the doc, which says:
If you set Format as to Time series, for use in Graph panel for example,
then there are some requirements for what your query returns.
Must be a column named time_sec representing a unix epoch in seconds.
Must be a column named value representing the time series value.
Must be a column named metric representing the time series name.
How can I apply this to my database? Is it even possible?
Here is the solution, thanks to the Grafana team!
daniellee's answer
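For reference, given the column requirements quoted above (time_sec, value, metric), the query for this meteo table should end up looking roughly like the following (a sketch for Grafana 4.x's "Format as Time series", not verified against this exact setup):

SELECT
  UNIX_TIMESTAMP(date_insert) AS time_sec,
  temperature AS value,
  'temperature' AS metric
FROM meteo
ORDER BY date_insert ASC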
I'm working on a database structure for a big project and I'm wondering which method to use for the logs table.
I'm using Laravel 5.* with Eloquent.
This table will contain User_id, User-Agent, IP, DNS, Lang, ...
Method A:
LOGS_TABLE:
| Id | user_id | dns            | ip      | user_agent ....   |
|----|---------|----------------|---------|-------------------|
| 1  | 5       | dns.google.com | 8.8.8.8 | firefox.*........ |
Method B:
LOGS TABLE:
| Id | dns_id | ip_id | user_agent_id |  |
|----|--------|-------|---------------|--|
| 1  | 1      | 1     | 1             |  |
IP TABLE:
| Id | value   |
|----|---------|
| 1  | 8.8.8.8 |
The problem is, there are 10 fields like this and I'm afraid that all the joins will slow down the queries.
Why do we save all the logs?
Our tool provides a complete, high-end IP filtering service. The purpose is to let our customers filter their advertising traffic and choose exactly who sees their website.
The main purpose is to choose exactly which page they want to send Facebook to, for example while advertising on Facebook.
All the traffic on the service comes from visitors viewing our customers' ads.
Technically, we just do a 301 redirect to the right page and log the visitor data in our database.
Thanks for your help.
What do you want to achieve with the log database? If it is just inserting data, I would go for a denormalized table (option 1).
If you also want to select data on every request, both options will slow your application down. You should maybe take a look at a NoSQL database.
partitioning
Another option could be to use partitioning; see: https://laracasts.com/discuss/channels/eloquent/partition-table
In this case you can work with a checksum of the unique data and store the corresponding data in a table with a prefix.
For example: $checksum = 'pre03k3I03fsk34jks354jks35m..'; would be stored in the table logs_p or logs_pr.
Do not forget to put an index on the checksum column.
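As a rough illustration of that idea (all names and column sizes below are made up), one of those prefixed tables could look like this, with the checksum indexed:

CREATE TABLE logs_pr (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    checksum CHAR(32) NOT NULL,      -- e.g. an MD5 of the visitor's unique data
    user_id INT UNSIGNED,
    dns VARCHAR(255),
    ip VARCHAR(45),
    user_agent VARCHAR(512),
    created_at DATETIME,
    INDEX idx_checksum (checksum)
);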
I have a table for holding translations. It is laid out as follows:
id | iso | token | content
-----------------------------------------------
1 | GB | test1 | Test translation 1 (English)
2 | GB | test2 | Test translation 2 (English)
3 | FR | test1 | Test translation 1 (French)
4 | FR | test2 | Test translation 2 (French)
// etc
For the translation management tool to go along with the table I need to output it in something more like a spreadsheet grid:
token          | GB                           | FR                          | (other languages) -->
----------------------------------------------------------------------------------------------------
test1          | Test translation 1 (English) | Test translation 1 (French) |
test2          | Test translation 2 (English) | Test translation 2 (French) |
(other tokens) |                              |                             |
      |        |                              |                             |
      V        |                              |                             |
I thought this would be easy, but it turned out to be far more difficult than I expected!
After a lot of searching and digging around I did find group_concat, which for the specific case above I can get to work and generate the output I'm looking for:
select
  token,
  group_concat(if (iso = 'FR', content, NULL)) as 'FR',
  group_concat(if (iso = 'GB', content, NULL)) as 'GB'
from
  translations
group by token;
However, this is, of course, totally inflexible. It only works for the two languages I have specified so far. The instant I add a new language I have to manually update the query to take it into account.
I need a generalized version of the query above, that will be able to generate the correct table output without having to know anything about the data stored in the source table.
Some sources claim you can't easily do this in MySQL, but I'm sure it must be possible. After all, this is the sort of thing databases exist for in the first place.
Is there a way of doing this? If so, how?
Because of MySQL limitations, if I needed to do something like this on the query side and in one query, I would do it like this:
query:
select token, group_concat(concat(iso,'|',content)) as contents
from translations
group by token
"token";"contents"
"test1";"GB|Test translation 1 (English),FR|Test translation 1
(French),IT|Test translation 1 (Italian)" "test2";"GB|Test translation
2 (English),FR|Test translation 2 (French),IT|Test translation 2
(Italian)"
Then, while I am binding the rows, I could split on the comma to get the rows and split on the pipe for the header.
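As a small illustration of that client-side splitting (Python, with the contents string hard-coded just for the example):

contents = "GB|Test translation 1 (English),FR|Test translation 1 (French),IT|Test translation 1 (Italian)"

# split on the comma to get one entry per language, then on the pipe to
# separate the iso code (the header) from the translated content;
# note this assumes the content itself contains no commas or pipes
for entry in contents.split(","):
    iso, content = entry.split("|", 1)
    print(iso, "->", content)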
What you seek is often called a dynamic crosstab wherein you dynamically determine the columns in the output. Fundamentally, relational databases are not designed to dynamically determine the schema. The best way to achieve what you want is to use a middle-tier component to build the crosstab SQL statement similar to what you have shown and then execute that.
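A minimal sketch of that middle-tier approach in Python (assuming the mysql-connector-python package and placeholder credentials): read the distinct languages first, then generate one group_concat column per language, mirroring the hand-written query above.

import mysql.connector  # assumption: mysql-connector-python is installed

conn = mysql.connector.connect(user="...", password="...", database="...")  # placeholder credentials
cur = conn.cursor()

# 1. find out which languages currently exist
cur.execute("SELECT DISTINCT iso FROM translations ORDER BY iso")
isos = [row[0] for row in cur.fetchall()]

# 2. build one pivot column per language; the iso codes come from the table itself,
#    but you may still want to validate them before splicing them into SQL
columns = ", ".join(
    "group_concat(if(iso = '{0}', content, NULL)) as `{0}`".format(iso) for iso in isos
)
sql = "select token, {} from translations group by token".format(columns)

# 3. run the generated crosstab query
cur.execute(sql)
for row in cur.fetchall():
    print(row)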
I am trying to figure out how I can make MS Access use a field value that is 3 rows lower.
The data is from an external source which retrieves SNMP data every week. I linked a table in Access to the txt output file.
Here is a sample:
| Device | IP Address | Uptime | SNMP Custom |
--------------------------------------------------
| Router | 192.168.. | 1 day, 1h | IOS version |
Now that I want to get more information about the devices, Cisco decided it was necessary to add new lines to the output file, so the linked table now looks like this:
| Device | IP Address | Uptime | SNMP Custom | SNMP Custom 2
-----------------------------------------------------------------
| Router | 192.168.. | 1 day, 1h | IOS version |
| Technical Support: sometext
| Copyright (c) sometext
| Compiled | ABCD
Those 4 lines are from 1 device, and the ABCD value should be in the SNMP Custom 2 field. The excessive rows I can simply delete, but I have no idea how to move the ABCD value into the SNMP Custom 2 field.
Can this be done using MS Access (VB?) or classic ASP? Any thoughts are greatly appreciated.
Thanks in advance
If I've understood your OP correctly, try the following in an Access query:
UPDATE myTable SET [SNMP Custom 2] = [IP Address], [IP Address] = "" WHERE [Device] = "Compiled"
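If the leftover "Technical Support" and "Copyright" rows always look like the sample above, a follow-up delete query along these lines might clean them up after the update (untested; the literals are guesses based on the sample, and Access uses * rather than % as the LIKE wildcard in its default query mode):

DELETE FROM myTable
WHERE [Device] LIKE "Technical Support*"
   OR [Device] LIKE "Copyright*"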