Solidity setting a mapping to empty - ethereum

I am trying to create a smart contract using Solidity 0.4.4.
I was wondering if there is a way to reset a mapping that already has some values entered back to an empty one?
For example:
This initialises a new mapping
mapping (uint => uint) map;
Here I add some values
map[0] = 1;
map[1] = 2;
How can I set the map back to empty without iterating through all the keys?
I have tried delete, but then my contract does not compile.

Unfortunately, you can't. See the Solidity documentation for details on the reasons why. Your only option is to iterate through the keys.
If you don't know your set of keys ahead of time, you'll have to keep the keys in a separate array inside your contract.
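For example, here is a minimal sketch of that key-tracking pattern (the contract and function names are invented for illustration; it is written for a 0.8.x compiler, but the same idea works under 0.4.x with the appropriate pragma):
pragma solidity ^0.8.0;
contract ResettableMap {
mapping (uint => uint) map;
uint[] keys; // remember every key that has been written
function set(uint key, uint value) public {
// assumes 0 is never stored as a real value, so map[key] == 0 means "key not seen yet"
if (map[key] == 0) {
keys.push(key);
}
map[key] = value;
}
// "empties" the mapping by deleting every key we know about
function clear() public {
for (uint i = 0; i < keys.length; i++) {
delete map[keys[i]];
}
delete keys; // resets the keys array length to 0
}
}
Keep in mind that the loop costs gas for every stored key, so clearing a very large mapping in a single transaction can run into the block gas limit.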

I believe there is another way to handle this problem.
If you define your mapping with a second key, you can increment that key to essentially reset your mapping.
For example, if you wanted your mapping to reset every year, you could define it like this:
uint256 private _year = 2021;
mapping(uint256 => mapping(address => uint256)) private _yearlyBalances;
Adding and retrieving values works just like normal, with an extra key:
_yearlyBalances[_year][0x9101910191019101919] = 1;
_yearlyBalances[_year][0x8101810181018101818] = 2;
When it's time to reset everything, you just call
_year += 1

Related

In what slots are variables stored that are defined after an array in Solidity?

So I know how arrays are stored in storage. If I understand it correctly, it first stores the number of items in the array in the first slot, and then in the next slots it stores the hashed values.
My question is: what if I define a uint after the array, and the array at deployment has only 2 values? It should then take up 3 slots, and in the fourth slot is the uint I defined.
What if there is a function that will push something to the array? How is it stored?
Will it be stored in the next free slot? Or will it push the uint to the next slot and replace it with the new value?
I hope the question is clear; if not, I will try to rephrase it.
Also, if there is some good resource where I can learn all about storage in Solidity, please share the link.
Thanks a lot!
A fixed-size array stores its values in sequential storage slots, starting with index 0. There's no prepended slot that would hold the total length. Any unset values keep the default value of 0.
pragma solidity ^0.8;
contract MyContract {
address[3] addresses; // storage slots 0, 1, 2
uint256 number; // storage slot 3
constructor(address[2] memory _addresses, uint256 _number) {
// a fixed-size array of a different length can't be assigned directly, so copy element by element
addresses[0] = _addresses[0];
addresses[1] = _addresses[1];
number = _number;
}
}
Passing 2 addresses to the constructor, storage slot values in this case:
0: _addresses[0]
1: _addresses[1]
2: default value of zero (third address was not defined)
3: _number
A dynamic-size array stores its values in slots whose keys are the hash of the property's storage slot (in the example below that's 0, as it's the first storage property), and in the immediately following slots. In the property storage slot itself, it stores the array length.
pragma solidity ^0.8;
contract MyContract {
/*
* storage slots:
* p (in this case value 0, as this is the first storage property) = length of the array
* keccak256(p) = value of index 0
* keccak256(p) + 1 = value of index 1
* etc.
*/
address[] addresses;
// storage slot 1
uint256 number;
constructor(address[] memory _addresses, uint256 _number) {
addresses = _addresses;
number = _number;
}
}
Passing 2 addresses to the constructor, storage slot values in this case:
0: value 2 (length of the array)
1: _number
0x290decd9548b62a8d60345a988386fc84ba6bc95484008f6362f93160ef3e563 (hash of uint 0): _addresses[0]
0x290decd9548b62a8d60345a988386fc84ba6bc95484008f6362f93160ef3e564 (hash of uint 0, plus 1): _addresses[1]
Docs: https://docs.soliditylang.org/en/v0.8.13/internals/layout_in_storage.html#mappings-and-dynamic-arrays
So to answer your questions:
What if there is a function that will push something to the array? How is it stored?
Will it be stored in the next free slot? Or will it push the uint to the next slot and replace it with the new value?
Fixed-size arrays cannot be resized. You can only rewrite their values, while the default value of each item is 0.
In the case of dynamic-size arrays, it pushes the new value right after the last one. Since the values are stored in slots whose indexes are based on a hash, the probability of overwriting another value is practically 0 (i.e. that would mean a hash collision).
In both cases, it doesn't affect how other storage properties are stored.
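To make the push behaviour concrete, here is a small sketch (a hypothetical contract, reusing the layout from the dynamic-size example above):
pragma solidity ^0.8;
contract PushExample {
address[] addresses; // slot 0 holds the length; items live at keccak256(0), keccak256(0) + 1, ...
uint256 number; // slot 1
function addAddress(address _a) external {
// push increments the length stored in slot 0 and writes _a to keccak256(0) + (newLength - 1),
// i.e. right after the previous last element; slot 1 (`number`) is not moved or overwritten
addresses.push(_a);
}
}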

How do you avoid the EF Core 3.0 "Restricted client evaluation" error when trying to order data from a one-to-many join?

I have a table I'm displaying in a site which is pulling data from a few different SQL tables. For reference, I'm following this guide to set up a sortable table. To simplify the model, say I have a main class called "Data" which looks like this (while the Quotes class stores the DataID):
namespace MyProject.Models
{
public class Data
{
public int Id { get; set; }
public string Name { get; set; }
public int LocationId { get; set; }
public Models.Location Location { get; set; }
public IList<Models.Quote> Quotes { get; set; }
}
}
Then I retrieve the IQueryable object using this code:
IQueryable<Models.Data> dataIQ = _context.Data
.Include(d => d.Quotes)
.Include(d => d.Location);
The "Quotes" table is a one-to-many mapping while the location is a one-to-one. How do I order the IQueryable object by a value in the Quotes table? Specifically I'm trying to do this when the user clicks a filter button. I tried just doing this on the first item in the list (which is guaranteed to be populated) but that throws the client-evaluation error I mentioned in the title. This is the code I'm using to apply the sorting:
//This one throws the client-evaluation error
dataIQ = dataIQ.OrderByDescending(d => d.Quotes[0].QuoteName);
//This one works as expected
dataIQ = dataIQ.OrderByDescending(d => d.Location.LocationName);
So you have a table filled with objects of class DataItem, and you have a table Quotes. There is a one-to-many relation between DataItems and Quotes: every DataItem has zero or more Quotes, and every Quote belongs to exactly one DataItem, namely the DataItem that the foreign key DataItemId refers to.
Furthermore, every Quote has a property QuoteName.
Note that I changed the identifier of your Data class, to DataItem, so it would be easier for me to talk in singular and plural nouns when referring to one DataItem or when referring to a collection of DataItems.
You want to order your DataItems, in ascending value of property QuoteName of the first Quote of the DataItem.
I see two problems:
What if a DataItem doesn't have any quotes?
Is the term "first Quote" defined: if you look at the tables, can you say: "This is the first Quote of the DataItem with Id == 4"?
This is the reason that it is usually better to design a one-to-many relation using virtual ICollection<Quote> rather than virtual IList<Quote>. The value of DataItem[3].Quotes[4] is not defined, hence it is not useful to give users access to the index.
But let's assume that, if you have an IQueryable<Quote>, you can define "the first quote". This can be the Quote with the lowest Id, or the Quote with the oldest Date. Maybe it is the Quote that has been quoted most often. In any case, you can define an extension method:
public static IOrderedQueryable<Quote> ToDefaultQuoteOrder(this IQueryable<Quote> quotes)
{
// order by quote Id:
return quotes.OrderBy(quote => quote.Id);
// or order by QuoteName:
return quotes.OrderBy(quote => quote.QuoteName);
// or a complex sort order: most mentioned quotes first,
// then order by oldest quotes first
return quotes.OrderByDescending(quote => quote.Mentions.Count())
.ThenBy(quote => quote.Date)
.ThenBy(quote => quote.Id);
}
It is only useful to create an extension method if you expect it to be used several times.
Now that we've defined an order for your quotes, from every DataItem you can get the first quote:
DataItem dataItem = ...
Quote firstQuote = dataItem.Quotes.ToDefaultQuoteOrder()
.FirstOrDefault();
Note: if the dataItem has no Quotes at all, there won't be a firstQuote, so you can't get its name. Therefore, when concatenating LINQ statements, it is usually only a good idea to use FirstOrDefault() as the last method in the sequence.
So the answer to your question is:
var result = _context.DataItems.Select(dataItem => new
{
DataItem = dataItem,
OrderKey = dataItem.Quotes.ToDefaultQuoteOrder()
.Select(quote => quote.QuoteName)
.FirstOrDefault(),
})
.OrderBy(selectionResult => selectionResult.OrderKey)
.Select(selectionResult => selectionResult.DataItem);
The nice thing about the extension method is that you hide how your quotes are ordered. If you later want to change this, say to order not by Id but by oldest quote date, the users of the method won't have to change anything.
One final remark: it is usually not a good idea to use Include as a shortcut for Select. If DataItem[4] has 1000 Quotes, then every one of its Quotes will have a DataItemId with a value of 4. It is quite a waste to send this value 4 over a thousand times. When using Select you can transfer only the properties that you actually plan to use:
.Select(dataItem => new
{
// Select only the data items that you plan to use:
Id = dataItem.Id,
Name = dataItem.Name,
...
Quotes = dataItem.Quotes.ToDefaultQuoteOrder().Select(quote => new
{
// again only the properties that you plan to use:
Id = quote.Id,
...
// not needed, you know the value:
// DataItemId = quote.DataItemId,
})
.ToList(),
});
In Entity Framework, always use Select to select data, and select only the properties that you really plan to use. Only use Include if you plan to change / update the included data.
Certainly don't use Include because it saves you typing. Again: whenever you have to do something several times, create a procedure for it:
As an extension method:
public static IQueryable<MyClass> ToPropertiesINeed(this IQueryable<DataItem> source)
{
return source.Select(item => new MyClass
{
Id = item.Id,
Name = item.Name,
...
Quotes = item.Quotes.ToDefaultQuoteOrder().Select(...).ToList(),
});
}
Usage:
var result = _context.DataItems.Where(dataItem => ...)
.ToPropertiesINeed();
The nice thing about Select is that you separate the structure of your database from the actually returned data. If your database structure changes, users of your classes won't have to see this.
Ok, I think I figured it out (at least partially**). I believe I was getting the error because what I had was really just not correct syntax for a LINQ query--that is, I was trying to use a list member in a query on a table that it didn't exist in (maybe?)
Correcting the syntax, I was able to come up with this, which works for my current purposes. The downside is that it's only sorting by the first item in the list. I'm not sure how you'd do this for multiple items--would be interested to see if anyone else has thoughts.
dataIQ = dataIQ.OrderByDescending(d => d.Quotes.FirstOrDefault().QuoteName);
**Edit: confirmed this is only partially fixing my issue. I'm still getting the original error if I try to access a child object of Quotes. Anyone have suggestions on how to avoid this error? The below example still triggers the error:
IQueryable<Models.Data> dataIQ = _context.Data
.Include(d => d.Quotes).ThenInclude(q => q.Owner)
.Include(d => d.Location);
dataIQ = dataIQ.OrderByDescending(d => d.Quotes.FirstOrDefault().Owner.OwnerName);
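(One pattern worth trying, in the spirit of the projection approach in the answer above -- an untested sketch, assuming Quote exposes an Owner navigation with an OwnerName property -- is to push the member access inside a Select before calling FirstOrDefault, so EF Core can translate it as a subquery:)
dataIQ = dataIQ.OrderByDescending(d => d.Quotes
.Select(q => q.Owner.OwnerName)
.FirstOrDefault());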

Best way to cache results of method with multiple parameters - Object as key in Dictionary?

At the beginning of a method I want to check if the method has been called with these exact parameters before, and if so, return the result that was returned back then.
At first, with one parameter, I used a Dictionary, but now I need to check 3 parameters (a String, an Object and a boolean).
I tried making a custom Object like so:
var cacheKey:Object = { identifier:identifier, type:type, someBoolean:someBoolean };
//if key already exists, return it (not working)
if (resultCache[cacheKey]) return resultCache[cacheKey];
//else: create result ...
//and save it in the cache
resultCache[cacheKey] = result;
But this doesn't work, because the second time the function is called, the new cacheKey is not the same object as the first, even though its properties are the same.
So my question is: is there a datatype that will check the properties of the object used as key for a matching key?
And what else is my best option? Create a cache for the keys as well? :/
Note there are two aspects to the technical solution: equality comparison and indexing.
The Cliff Notes version:
It's easy to do custom equality comparison
In order to perform indexing, you need to know more than whether one object is equal to another -- you need to know which object is "bigger" than the other.
If all of your properties are primitives, you should squash them into a single string and use an Object to keep track of them (NOT a Dictionary).
If you need to compare some of the individual properties for reference equality, you're going to have to write a function to determine which set of properties is bigger than the other, and then make your own collection class that uses the output of the comparison function to implement its own binary-search-tree based indexing.
If the number of unique sets of arguments is in the several hundreds or less AND you do need reference comparison for your Object argument, just use an Array and the some method to do a naive comparison to all cached keys. Only you know how expensive your actual method is, so it's up to you to decide what lookup cost (which depends on the number of unique arguments provided to the function) is acceptable.
Equality comparison
To address equality comparison it is easy enough to write some code to compare objects for the values of their properties, rather than for reference equality. The following function enforces strict set comparison, so that both objects must contain exactly the same properties (no additional properties on either object allowed) with the same values:
public static function propsEqual(obj1:Object, obj2:Object):Boolean {
for (var key1:* in obj1) {
if (obj2[key1] === undefined)
return false;
if (obj1[key1] != obj2[key1])
return false;
}
for (var key2:* in obj2)
if (obj1[key2] === undefined)
return false;
return true;
}
You could speed it up by eliminating the second for loop with the tradeoff that {A:1, B:2} will be deemed equal to {A:1, B:2, C:'An extra property'}.
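For instance, the relaxed one-directional check could look like this (a sketch; the name propsEqualOneWay is made up):
public static function propsEqualOneWay(obj1:Object, obj2:Object):Boolean {
// only checks that every property of obj1 exists on obj2 with the same value;
// extra properties on obj2 are ignored
for (var key:* in obj1) {
if (obj2[key] === undefined || obj1[key] != obj2[key])
return false;
}
return true;
}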
Indexing
The problem with this in your case is that you lose the indexing that a Dictionary provides for reference equality, or that an Object provides for string keys. You would have to compare each new set of function arguments to the entire list of previously seen arguments, such as using Array.some. I use the field currentArgs and the compareToCurrent method below to avoid generating a new closure every time.
private var cachedArgs:Array = [];
private var currentArgs:Object;
function yourMethod(stringArg:String, objArg:Object, boolArg:Boolean):* {
currentArgs = { stringArg:stringArg, objArg:objArg, boolArg:boolArg };
var iveSeenThisBefore:Boolean = cachedArgs.some(compareToCurrent);
if(!iveSeenThisBefore)
cachedArgs.push(currentArgs);
}
function compareToCurrent(obj:Object):Boolean {
return someUtil.propsEqual(obj, currentArgs);
}
This means comparison will be O(n) time, where n is the ever increasing number of unique sets of function arguments.
If all the arguments to your function are primitive, see the very similar question In AS3, where do you draw the line between Dictionary and ArrayCollection?. The title doesn't sound very similar, but the solution in the accepted answer (yes I wrote it) addresses the exact same technical issue -- using multiple primitive values as a single compound key. The basic gist in your case would be:
private var cachedArgs:Object = {};
function yourMethod(stringArg:String, objArg:Object, boolArg:Boolean):* {
var argKey:String = stringArg + objArg.toString() + (boolArg ? 'T' : 'F');
if(cachedArgs[argKey] === undefined)
cachedArgs[argKey] = _yourMethod(stringArg, objArg, boolArg);
return cachedArgs[argKey];
}
private function _yourMethod(stringArg:String, objArg:Object, boolArg:Boolean):* {
// Do stuff
return something;
}
If you really need to determine which reference is "bigger" than another (as the Dictionary does internally), you're going to have to wade into some ugly stuff, since Adobe has not yet provided any API to retrieve the "value" / "address" of a reference. The best thing I've found so far is this interesting hack: How can I get an instance's "memory location" in ActionScript?. Without doing a bunch of performance tests, I don't know if using this hack to compare references would kill the advantages gained by binary search tree indexing. Naturally it would depend on the number of keys.

How to get the auto-increment primary key value in MySQL using Hibernate

I'm using Hibernate to access MySQL, and I have a table with an auto-increment primary key.
Every time I insert a row into the table I don't need to specify the primary key. But after I insert a new row, how can I get the corresponding primary key immediately using Hibernate?
Or can I just use JDBC to do this?
When you save the hibernate entity, the id property will be populated for you. So if you have
MyThing thing = new MyThing();
...
// save the transient instance.
dao.save(thing);
// after the session flushes, thing.getId() should return the id.
I actually almost always do an assertNotNull on the id of a persisted entity in my tests to make sure the save worked.
Once you've persisted the object, you should be able to call getId() or whatever your @Id column is, so you could return that from your method. You could also invalidate the Hibernate first-level cache and fetch it again.
However, for portability, you might want to look at using Hibernate with sequence style ID generation. This will ease the transition away from MySQL if you ever need to. Certainly, if you use this style of generator, you'll be able to get the ID immediately, because Hibernate needs to resolve the column value before it persists the object:
@Id
@GeneratedValue(generator = "MY_SEQ")
@GenericGenerator(name = "MY_SEQ",
strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
parameters = {
@Parameter(name = "sequence_name", value = "MY_SEQ"),
@Parameter(name = "initial_value", value = "1"),
@Parameter(name = "increment_size", value = "10") }
)
@Column(name = "id", nullable = false)
public Long getId () {
return this.id;
}
It's a bit more complex, but it's the kind of thing you can cut and paste, apart from changing the SEQUENCE name.
When you call the save() method in Hibernate, the object doesn't necessarily get written to the database immediately. It occurs either when you try to read from the database (from the same table?) or when you explicitly call flush(). Until the corresponding record is inserted into the database table, MySQL will not allocate an id for it.
So, the id is available, but not before Hibernate actually inserts the record into the MySQL table.
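As a rough sketch (MyThing is the entity from the answer above; sessionFactory and the rest of the session handling are assumed, not taken from the question):
Session session = sessionFactory.getCurrentSession();
MyThing thing = new MyThing();
session.save(thing);
session.flush(); // make sure the INSERT has actually reached MySQL
Long id = thing.getId(); // now populated from the AUTO_INCREMENT column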
If you want, you can get the next primary key independently of an object using:
Session session = SessionFactoryUtil.getSessionFactory().getCurrentSession();
Query query = session.createSQLQuery( "select nextval('schemaName.keySequence')" );
Long key = (Long) query.list().get( 0 );
return key;
Well, in the case of the auto-increment generator class, when we use the save() method it returns the generated primary key (assuming it is the id). So you can do this:
int id = (Integer) session.save(modelClass); // save() returns a Serializable key, so cast it
and return id.

Lists in User Defined Types (SQL Server 2008)

I'm trying to define a new type and have not had much luck finding any information about using lists within them. Basically my new type will contain two lists, let's say x and y, of type SqlSingle (the user-defined type is written in C#). Is this even possible?
If not, how are you supposed to go about simulating two lists of arbitrary length in a SQL Server 2008 column?
I'm possibly going about this the wrong way but it is the best approach I can think of at the moment. Any help is very much appreciated.
You can use a List<T> in a CLR UDT - although CLR types are structs, which should be immutable, so a ReadOnlyCollection<T> would be a better choice if you don't have a very compelling reason for the mutability. What you need to know in either case is that SQL won't know how to use the list itself; you can't simply expose the list type as a public IList<T> or IEnumerable<T> and be on your merry way, like you would be able to do in pure .NET.
Typically the way to get around this would be to expose a Count property and some methods to get at the individual list items.
Also, in this case, instead of maintaining two separate lists of SqlSingle instances, I would create an additional type to represent a single point, so you can manage it independently and pass it around in SQL if you need to:
[Serializable]
[SqlUserDefinedType(Format.Native)]
public struct MyPoint
{
private SqlSingle x;
private SqlSingle y;
// (no explicit parameterless constructor: C# structs can't declare one here)
public MyPoint(SqlSingle x, SqlSingle y) : this()
{
this.x = x;
this.y = y;
}
// You need this method because SQL can't use the ctors
[SqlFunction(Name = "CreateMyPoint")]
public static MyPoint Create(SqlSingle x, SqlSingle y)
{
return new MyPoint(x, y);
}
// Snip Parse method, Null property, etc.
}
The main type would look something like this:
[Serializable]
[SqlUserDefinedType(Format.UserDefined, IsByteOrdered = true, MaxByteSize = ...)]
public struct MyUdt
{
// Make sure to initialize this in any constructors/builders
private IList<MyPoint> points;
[SqlMethod(OnNullCall = false, IsDeterministic = true, IsPrecise = true)]
public MyPoint GetPoint(int index)
{
if ((index >= 0) && (index < points.Count))
{
return points[index];
}
return MyPoint.Null;
}
public int Count
{
get { return points.Count; }
}
}
If you need SQL to be able to get a sequence of all the points, then you can add an enumerable method to the sequence type as well:
[SqlFunction(FillRowMethodName = "FillPointRow",
TableDefinition = "[X] real, [Y] real")]
public static IEnumerable GetPoints(MyUdt obj)
{
return obj.Points;
}
public static void FillPointRow(object obj, out SqlSingle x, out SqlSingle y)
{
MyPoint point = (MyPoint)obj;
x = point.X;
y = point.Y;
}
You might think that it's possible to use an IEnumerable<T> and/or use an instance method instead of a static one, but don't even bother trying, it doesn't work.
So the way you can use the resulting type in SQL Server is:
DECLARE @UDT MyUdt
SET @UDT = <whatever>
-- Will show the number of points
SELECT @UDT.Count
-- Will show the binary representation of the second point
SELECT @UDT.GetPoint(1) AS [Point]
-- Will show the X and Y values for the second point
SELECT @UDT.GetPoint(1).X AS [X], @UDT.GetPoint(1).Y AS [Y]
-- Will show all the points
SELECT * FROM dbo.GetPoints(@UDT)
Hope this helps get you on the right track. UDTs can get pretty complicated to manage when they're dealing with list/sequence data.
Also note that you'll obviously need to add serialization methods, builder methods, aggregate methods, and so on. It can be quite an ordeal; make sure that this is actually the direction you want to go in, because once you start adding UDT columns it can be very difficult to make changes if you realize that you made the wrong choice.
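For what it's worth, here is a rough sketch of what the Format.UserDefined serialization for the points list might look like, assuming MyUdt implements IBinarySerialize and MyPoint exposes public X and Y properties (both are assumptions, neither is shown above); these members would sit inside MyUdt and need using System.IO and System.Collections.Generic:
public void Write(BinaryWriter w)
{
w.Write(points.Count);
foreach (MyPoint point in points)
{
// SqlSingle.Value unwraps to a plain float
w.Write(point.X.Value);
w.Write(point.Y.Value);
}
}
public void Read(BinaryReader r)
{
int count = r.ReadInt32();
var list = new List<MyPoint>(count);
for (int i = 0; i < count; i++)
{
// float converts implicitly back to SqlSingle
list.Add(MyPoint.Create(r.ReadSingle(), r.ReadSingle()));
}
points = list;
}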
Lists as you describe are usually normalized - that is, stored in separate tables with one row per item - rather than trying to cram them into a single column. If you can share more info on what you are trying to accomplish, maybe we can offer more assistance.
Edit - suggested table structure:
-- route table--
route_id int (PK)
route_length int (or whatever)
route_info <other fields as needed>
-- waypoint table --
route_id int (PK)
sequence tinyint (PK)
lat decimal(9,6)
lon decimal(9,6)
waypoint_info <other fields as needed>
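Reading the "list" back for one route is then just an ordered query over the child table, for example (illustrative only):
SELECT w.sequence, w.lat, w.lon
FROM waypoint AS w
WHERE w.route_id = @route_id
ORDER BY w.sequence;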