Get the value of COUNT(*) into an html table - html

I'm using a servlet to make the JDBC connection, write the PreparedStatements and work with the ResultSets. I am able to display the data in a webpage just fine; however, I also want to be able to count the number of entries. I know there are other ways to count how many rows I have using Java code, but I want to use an SQL statement, and I saw this
SELECT COUNT(*) FROM table_name;
and made a PreparedStatement and tried to execute it. However, it is not returning the value of the count; instead I get
"com.mysql.jdbc.JDBC42ResultSet#4a9b1e8b" or "com.mysql.jdbc.JDBC42PreparedStatement#4a9b1e8b" (because I tried getting the value of the count using both).
Basically, I am wondering how to get the count value into my HTML table from the servlet, rather than the long object strings above.
Many thanks, I'm a beginner.

When you execute your SQL query with JDBC, you get a ResultSet even when the query returns only one record with only one field, as in your question.
You need to advance the cursor to that record with rs.next() and then call the getInt or getLong method of the ResultSet to get the actual value:
long countValue = rs.getLong(1);
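Putting it together, here is a minimal sketch of the servlet side, assuming it runs inside your doGet/doPost where a Connection conn already exists (table_name and the rowCount attribute name are placeholders, not from the original question):

// Count the rows with SQL and read the single value out of the ResultSet.
PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM table_name");
ResultSet rs = ps.executeQuery();

long countValue = 0;
if (rs.next()) {                // move the cursor onto the first (and only) row
    countValue = rs.getLong(1); // column 1 holds the COUNT(*) result
}
rs.close();
ps.close();

// Hand the value to the view instead of printing the ResultSet itself,
// e.g. as a request attribute rendered into a table cell: <td>${rowCount}</td>
request.setAttribute("rowCount", countValue);

Printing the ResultSet or PreparedStatement object itself is what produces the "com.mysql.jdbc.JDBC42ResultSet#..." strings you saw; the numeric value only exists after rs.next() and rs.getLong(1).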
Have a look at Oracle's documentation on JDBC
You can also have a look at this post on SO

Related

Creating a global variable in Talend to use as a filter in another component

I have a job in Talend that is designed to bring together data from different databases: one is a MySQL database and the other an MSSQL database.
What I want to do is match a selection of loan numbers from the MySQL database (about 82,000 loan numbers) to the corresponding information we have housed in the MSSQL database.
However, the tables in MSSQL to which I am joining the data from MySQL are much larger (~2 million rows) and quite wide, and thus cost much more time to query. Ideally I could perform an inner join between the two tables based on the loan number, but since they are in different databases this is not possible. The inner join that is performed inside a tMap occurs after the Lookup input has already returned its data set, which is quite large (especially since this particular MSSQL query executes a user-defined function for each loan number).
Is there any way to create a global variable out of the output from the MySQL query (namely, the loan numbers selected by the MySQL query) and use that global variable as an IN clause in the MSSQL query?
This should be possible. I'm not working in MySQL but I have something roughly equivalent here that I think you should be able to adapt to your needs.
I've never actually answered a Stack Overflow question before, and while I was typing this the page started telling me I need at least 10 reputation to post more than 2 pictures/links here (and I think I need 4 pictures), so I'm just going to write it out in words here and post the whole thing, complete with illustrations, on my blog in case you need more info (quite likely, I should think!).
As you can see, I've got some data coming out of the table and getting filtered by tFilterRow_1 to only show the rows I'm interested in.
The next step is to limit the flow to just the field I want to use in the variable. I've used tMap_3 rather than a tFilterColumns because the field I'm using is a string and I wanted to be able to concatenate single quotes around it; if you're using an integer you might not need to do that. And of course, if you have a lot of repetition you might also want to add a tUniqRow in there as well to strip out the duplicate values.
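For reference, the tMap output expression is just plain Java string concatenation, something along these lines (row1 and loan_number are placeholder names for illustration):

// tMap output expression: wrap the incoming value in single quotes so it can
// later sit inside a SQL IN (...) list
"'" + row1.loan_number + "'"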
The next step is the one that does the magic. I've got a list like this:
'A1'
'A2'
'B1'
'B2'
etc, and I want to turn it into 'A1','A2','B1','B2' so I can slot it into my where clause. For this, I've used tAggregateRow_1, selecting "list" as the aggregate function to use.
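If it helps to picture what that step produces, the "list" aggregate behaves roughly like joining the quoted values with commas. A plain Java illustration (not actual Talend code, and the values are made up):

// The quoted loan numbers arriving one per row from the previous step...
String[] quoted = { "'A1'", "'A2'", "'B1'", "'B2'" };
// ...are concatenated by the "list" function into one comma-separated string
String myList = String.join(",", quoted);
// myList is now "'A1','A2','B1','B2'", ready to drop into an IN (...) clause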
Next up, we want to take this list and put it into a context variable (I've already created the context variable in the metadata - you know how to do that, right?). Use another tMap component, feeding into a tContextLoad component. tContextLoad always has two columns in its schema, so map the output of tAggregateRow_1 to the "value" column and enter the name of the variable in the "key" column. In this example, my context variable is called MyList.
Now your list is loaded as a text string and stored in the context variable, ready for retrieval. So open up a new database input component and embed the variable in the SQL code like this:
"SELECT distinct MY_COLUMN " +
"from MY_SECOND_TABLE where the_selected_row in (" + context.MyList + ")"
It should be as easy as that, and when I whipped it up it worked first time, but let me know if you have any trouble and I'll see what I can do.

MS Access: how can I pass my search criteria from the top query to a subquery?

I have a query on a base-data table that (given that my search criteria are correct) gives back approx. 950 records.
Apart from the 3 criteria fields, I want to have about 10 more fields (the project is still at the beginning), every single one based on sub-queries; some of them are normal select queries, some are aggregate queries.
As far as I know, every sub-query must give back one and only one value.
This value should be individual for every record of the top query.
My problem now is that I don't know how to pass the search criteria from the top query (a simple select query) to the sub-queries in the 10 fields I mentioned before.
Is this possible at all, or is my approach too complicated? Is there possibly an easier way?
I have a Windows 7 System with Office 2010 installed.
Your help is much appreciated.
Thanks a lot.
PS
The sub-queries are based on the same table as the top query. Sorry, I forgot to mention.
You can pass values between queries with a function call that reads a public variable. This VBA must be in a standard Module, not behind a Form module. I don't use this approach very often, because the global value sits in volatile memory; I prefer to save the variable in a dedicated data table.
Public strGlobal As String

' Returns the stored value; call this function from your query criteria.
Public Function Func_ReadGlobal() As String
    Func_ReadGlobal = strGlobal
End Function

' Stores a value in the public variable so later queries can read it.
Public Function Func_WriteGlobal(strValue As String) As String
    strGlobal = strValue
    Func_WriteGlobal = strGlobal
End Function
In all subqueries, create parameter(s) and use them as search criteria. The parameter name should be the same for the same column. Now, if you use those subqueries in your main query, Access will ask only once for each parameter name; you don't need to pass them explicitly to the subqueries.
Thank you guys.
I didn't think of the most obvious solution with the globals. I will try it out as soon as my boss gives me the time to continue with the project.
#Sergey
I can't use the parameter approach, because the whole query, including the subqueries, has to run completely on its own from VBA, without any human input at all.

SQLAlchemy - relationship loading outside of query

We are in need of a way to load a relationship explicitly, outside of a query, from a given set of cached values.
Our query is quite complicated and has a few explicit joinedload(..) options. Sadly, using too many of those really slows down the query as a whole, so we started using the subqueryload(..) technique instead. However this does not work as expected: subqueryload emits a second query using a DISTINCT clause over the first query (which is quite costly). What we are trying to do instead is to build the set of relationship keys we need to load once the first query has finished running. Once we get back the results from this second query, how do we tell SQLAlchemy to associate the results of the first query with the results of the second query?
Here is a snippet showing what we do for now (it works, but it is quite ugly):
result = session.query(tableA).options(lazyload('*'), joinedload(tableB)).all()

relationship_keys = set()
for r in result:
    relationship_keys.add(r.tableC_id)

cache_relationships = session.query(tableC).filter(tableC.id.in_(relationship_keys)).all()

# Link the instances between them. This will not emit SQL, as the lazy load hits
# the cache previously loaded (same mechanism as session.get(..)).
[r.tableC_relationship for r in result]

group by in Rails returns strange data

I am trying to apply the group Active Record command in a Rails REST API; my database is MySQL.
When I query without group by I get correct data, but when I use group on the same query I get a strange data collection. I am using group to decrease the query time, because the original query takes a lot of time to retrieve data from the database.
Here is my original query:
Records.owned_by(User.find_by_email(params[:user].to_s).id).where(device_id: params[:did]).includes(:record_students, :record_employees, :record_admins, :record_others)
But when I use group to improve the efficiency, the returned data set is not valid.
Here is my new query with group:
Records.owned_by(User.find_by_email(params[:user].to_s).id).where(device_id: params[:did]).includes(:record_students, :record_employees, :record_admins, :record_others).group("date(created_at)")
Any idea what is wrong? Thanks.

Linq-to-Sql Count

I need to do a count on the items in a joined result set where a condition is true. I thus have a "from join where where" type of expression. This expression must end with a select or groupby. I do not need the column data actually and figure it is thus faster not to select it:
count = (from e in dc.entries select new {}).Count();
I have 2 questions:
Is there a faster way to do this in terms of the db load?
I have to duplicate my entire copy of the query. Is there a way to structure my query where I can have it one place for both counts and for getting say a list with all fields?
Thanks.
Please pay special attention:
The query is a join and not a simple table thus I must use a select statement.
I will need 2 different query bodies because I do not need to load all the actual fields for the count but will for the list.
I assume when I use the select query it is filling up with data when I use query.Count vs Table.Count. I look forward to those who understand what I'm asking suggesting possible better ways to do this, along with some detailed knowledge of what actually happens. I need to pull out the logging to look into this more deeply.
Queryable.Count:
The query behavior that occurs as a result of executing an expression tree that represents calling Count(IQueryable) depends on the implementation of the type of the source parameter. The expected behavior is that it counts the number of items in source.
In fact, if you use LINQ to SQL or LINQ to Entities, Queryable.Count() is sent to the database. No columns are loaded into memory. Check the generated SQL to confirm.
I assume when I use the select query it is filling up with data when I use query.Count vs Table.Count
This is not true. Check the generated sql to confirm.
I have to duplicate my entire copy of the query. Is there a way to structure my query where I can have it one place for both counts and for getting say a list with all fields
If you need both the count and the list, get the list and count it.
If you need the count sometimes and other times you need the list... write a method that returns the complex IQueryable, and sometimes call .Count() and other times call .ToList();
I do not need the column data actually and figure it is thus faster not to select it.
This is basically false in your scenario. It can be true in a scenario where an index covers the result columns, but you don't have any result columns.
In your scenario, whatever index is chosen by the query optimizer, that index can be used to make the count.
To sum up: the query optimizer will perform the optimization you desire.
// You can put a where condition here.
var queryEntries = from e in dc.entries select e;

// Get the count (this alone is translated to a COUNT query in the database).
int count = queryEntries.Count();

// Loop through the entries; this executes the query and returns all entries.
foreach (var en in queryEntries)
{
}