I have a simple data table with x and y values, something like this:
x y
-10 -0.505
-9 -0.422
-8 -0.335
-7 -0.243
-6 -0.148
-5 -0.051
-4 0.046
-3 0.144
-2 0.242
-1 0.34
0 0.539
1 0.658
2 0.773
3 0.716
4 0.8
5 0.88
6 0.952
7 1.016
8 1.071
9 1.116
10 1.15
The x step size as well as the min and max values might be different.
I am looking for a built-in functionality to interpolate between these values.
So I need a function which takes an x value and returns the corresponding y value. When there is no exact match, I need the function to linearly interpolate between the two closest values.
Of course I could write my own function, but I feel like there might be an easy solution, maybe even built into Dart.
I appreciate any help.
Thanks & cheers
Tobi
You can use a SplayTreeMap<double, double> to store your values; the keys define the ranges to search within. Then use an interpolation function as shown below:
var kEfficiencyMotorsFullLoad = SplayTreeMap<double, double>.from({
  0.75: .825,
  1.1: .841,
  1.5: .853,
  2.2: .867,
  3.0: .877,
  4.0: .886,
  5.5: .896,
  7.5: .904,
  11.0: .914,
  15.0: .921,
  18.5: .926,
  22.0: .930,
  30.0: .936,
});
double linearInterpolate(double target, SplayTreeMap<double, double> values) {
  if (values.containsKey(target)) return values[target]!;
  double? xa = values.lastKeyBefore(target);
  double? xb = values.firstKeyAfter(target);
  // target is above the largest key: clamp to the last value
  if (xa != null && xb == null) return values[xa]!;
  // target is below the smallest key: clamp to the first value
  if (xa == null && xb != null) return values[xb]!;
  // neither neighbour exists: the map is empty
  if (xa == null && xb == null) {
    throw Exception(
        "number was not found in the SplayTreeMap, check if it is not empty");
  }
  double ya = values[xa]!;
  double yb = values[xb]!;
  return ya + ((yb - ya) * ((target - xa!) / (xb! - xa)));
}
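Applied to the data from the question (abbreviated here to a few rows), a usage sketch could look like this; it assumes the dart:collection import and the linearInterpolate function above:
import 'dart:collection';

void main() {
  var table = SplayTreeMap<double, double>.from({
    -10.0: -0.505,
    -5.0: -0.051,
    0.0: 0.539,
    5.0: 0.88,
    10.0: 1.15,
  });
  print(linearInterpolate(0.0, table));  // exact key: 0.539
  print(linearInterpolate(-7.5, table)); // halfway between the -10 and -5 entries
  print(linearInterpolate(12.0, table)); // above the range: returns the last value, 1.15
}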
There might be some package on pub.dev that does it already, but I'd use a SplayTreeMap and its lastKeyBefore and firstKeyAfter methods to find the surrounding points and then interpolate between them. For example:
import 'dart:collection' show SplayTreeMap;
import 'dart:math' show Point;
/// Returns the y-coordinate for the specified x-coordinate on the line defined
/// by two given points.
double _interpolate(Point<double> p0, Point<double> p1, double x) {
  // y - y0 = m * (x - x0)
  var m = (p1.y - p0.y) / (p1.x - p0.x);
  return m * (x - p0.x) + p0.y;
}

class InterpolatingMap {
  final SplayTreeMap<double, double> _data;

  InterpolatingMap(Map<double, double> data)
      : _data = SplayTreeMap<double, double>.of(data);

  double operator [](double x) {
    var value = _data[x];
    if (value != null) {
      return value;
    }

    if (_data.isEmpty) {
      throw StateError('InterpolatingMap is empty');
    }

    double? lower = _data.lastKeyBefore(x);
    double? upper = _data.firstKeyAfter(x);
    assert(lower != null || upper != null);

    double x0;
    double x1;
    if (lower == null) {
      // `x` is to the left of the left-most data point. Extrapolate from the
      // first two entries.
      x0 = upper!;
      x1 = _data.firstKeyAfter(upper) ?? x0;
    } else if (upper == null) {
      // `x` is to the right of the right-most data point. Extrapolate from the
      // last two entries.
      x1 = lower;
      x0 = _data.lastKeyBefore(lower) ?? x1;
    } else {
      x0 = lower;
      x1 = upper;
    }

    return _interpolate(
      Point<double>(x0, _data[x0]!),
      Point<double>(x1, _data[x1]!),
      x,
    );
  }
}

void main() {
  var interpolatingMap = InterpolatingMap({
    0: 1,
    1: 2,
    2: 1,
  });

  print(interpolatingMap[-1]);   // Prints: 0
  print(interpolatingMap[0]);    // Prints: 1
  print(interpolatingMap[0.25]); // Prints: 1.25
  print(interpolatingMap[0.5]);  // Prints: 1.5
  print(interpolatingMap[0.75]); // Prints: 1.75
  print(interpolatingMap[1]);    // Prints: 2
  print(interpolatingMap[1.5]);  // Prints: 1.5
  print(interpolatingMap[3]);    // Prints: 0
}
Note that InterpolatingMap in the above implementation is a bit of a misnomer since it will also extrapolate values outside the data range. (It should be trivial to make it throw an exception if you want to disable extrapolation, however.) It also doesn't implement the Map interface (which is left as an exercise for readers who care about that).
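For example, such a guard near the top of operator [] might look like this (a sketch; it reuses the _data field from above):
// Hypothetical guard to disable extrapolation.
if (_data.isNotEmpty && (x < _data.firstKey()! || x > _data.lastKey()!)) {
  throw RangeError.value(x, 'x', 'is outside the data range');
}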
I would probably use binary search to find the matching range, then interpolate from that.
You can use the lowerBound method from package:collection to find the first key point that is not less than the x value you search for; stepping back one index from that gives you the start of the surrounding range.
Something like:
import"package:collection/collection.dart";
double interpolate(List<num> keyPoints, List<num> values, num x) {
if (keyPoints.length < 2) {
throw ArgumentError.value(keyPoints, "keyPoints",
"Needs at least two points to interpolate");
}
if (keyPoints.length != values.length) {
throw ArgumentError.value(values, "values",
"Must have the same number of elements as the key points");
}
var p = keyPoints.lowerBound(x);
if (p > keyPoints.length - 2) p = keyPoints.length - 2;
var startPosition = keyPoints[p];
var endPosition = keyPoints[p + 1];
var startValue = values[p];
var endValue = values[p + 1];
return (x - startPosition) / (endPosition - startPosition) * (endValue - startValue);
}
This will interpolate the value when x is between two key-points, and extrapolate the first or last range if the x value is outside the key-point range.
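With the table from the question (abbreviated to a few rows), a usage sketch for this function might look like this:
void main() {
  var xs = <num>[-10, -5, 0, 5, 10];
  var ys = <num>[-0.505, -0.051, 0.539, 0.88, 1.15];
  print(interpolate(xs, ys, -7.5)); // interpolated between the -10 and -5 entries
  print(interpolate(xs, ys, 0));    // lands exactly on a key point: 0.539
  print(interpolate(xs, ys, 12));   // extrapolated from the last range
}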
I'm using a very big BitmapData as a pathing map for my platformer game; however, I only ever use 4 particular pixel values out of, well, the 4294967295 possible ones.
Would converting this BitmapData into two 2D Vectors of Booleans save me some memory?
And if it does, what about performance? Would it be faster or slower to do something like:
function MapGetPixel(x:int, y:int):int
{
    return int(MapBoolFirst[x][y]) + int(MapBoolSecond[x][y]) * 2;
}
instead of the BitmapData class's getPixel32(x:int, y:int):uint?
In short, I'm looking for a way to reduce the size and/or optimize my 4-color BitmapData.
Edit :
Using my Boolean method apparently consumes 2 times more memory than the BitmapData one.
I guess a Boolean takes more than one bit in memory, else that would be too easy. So I'm thinking about bitshifting ints and thus having one int store the values of several pixels, but I'm not sure about this…
Edit 2 :
Using int bitshifts I can pack the data of 16 pixels into a single int. This trick should save some memory, even if it'll probably hit performance a bit.
Bitshifting will be the most memory-optimized way of handling it. Performance-wise, that shouldn't be too big of an issue unless you need to do a lot of lookups each frame. The issue with AS is that Booleans are 4 bytes :(
As I see it you can handle it in different cases:
1) Create a lower res texture for the hit detections, usually it is okay to shrink it 4 times (256x256 --> 64x64)
2) Use some kind of technique of saving that data into some kind of storage (bool is easiest, but if that is too big, then you need to find another solution for it)
3) Do the integer-solution (I haven't worked with bit-shifting before, so I thought it would be a fun challenge, here's the result of that)
And that solution is way smaller than the one used for boolean, and also way harder to understand :/
public class Foobar extends MovieClip {

    const MAX_X:int = 32;
    const MAX_Y:int = 16;
    var _itemPixels:Vector.<int> = new Vector.<int>(Math.ceil(MAX_X * MAX_Y / 32));

    public function Foobar() {
        var pre:Number = System.totalMemory;
        init();
        trace("size=" + _itemPixels.length);
        for (var i = 0; i < MAX_Y; ++i) {
            for (var j = 0; j < MAX_X; ++j) {
                trace("item=" + (i * MAX_X + j) + "=" + isWalkablePixel(j, i));
            }
        }
        trace("memory preInit=" + pre);
        trace("memory postInit=" + System.totalMemory);
    }

    public function init() {
        var MAX_SIZE:int = MAX_X * MAX_Y;
        var id:int = 0;
        var val:int = 0;
        var b:Number = 0;
        for (var y = 0; y < MAX_Y; ++y) {
            for (var x = 0; x < MAX_X; ++x) {
                b = Math.round(Math.random()); // lookup the pixel from some kind of texture or however you expose the items
                if (b == 1) {
                    id = Math.floor((y * MAX_X + x) / 32);
                    val = _itemPixels[id];
                    var it:uint = (y * MAX_X + x) % 32;
                    b = b << it;
                    val |= b;
                    _itemPixels[id] = val;
                }
            }
        }
    }

    public function isWalkablePixel(x, y):Boolean {
        var val:int = _itemPixels[Math.floor((y * MAX_X + x) / 32)];
        var it:uint = 1 << (y * MAX_X + x) % 32;
        return (val & it) != 0;
    }
}
One simple improvement is to use a ByteArray instead of BitmapData. That means each "pixel" only takes up 1 byte instead of 4. This is still a bit wasteful since you're only needing 2 bits per pixel and not 8, but it's a lot less than using BitmapData. It also gives you some "room to grow" without having to change anything significant later if you need to store more than 4 values per pixel.
ByteArray.readByte()/ByteArray.writeByte() work with integers, so they're really convenient to use. Of course, only the low 8 bits of the integer are written when calling writeByte().
You set ByteArray.position to the point (0-based index) where you want the next read or write to start from.
To sum up: Think of the ByteArray as a one dimensional Array of integers valued 0-255.
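A rough sketch of that idea (mapWidth, mapHeight, setCell and getCell are illustrative names, not from the question):
import flash.utils.ByteArray;

// One byte per map cell instead of a 32-bit pixel.
var mapWidth:int = 256, mapHeight:int = 256;
var map:ByteArray = new ByteArray();
map.length = mapWidth * mapHeight;

function setCell(x:int, y:int, value:int):void {
    map.position = y * mapWidth + x;
    map.writeByte(value); // only the low 8 bits are stored
}

function getCell(x:int, y:int):int {
    map.position = y * mapWidth + x;
    return map.readUnsignedByte(); // 0-255
}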
Here are the results. I was using an imported 8-bit colored .png, by the way; not sure if it changes anything when it gets converted into a BitmapData.
Memory usage :
BitmapData : 100%
Double Boolean vectors : 200%
Int Bitshifting : 12%
So int bitshifting wins hands down. It works pretty much the same way as hexadecimal color components, except that in this case I store 16 components (2-bit pixel values) instead of the 4 ARGB ones:
var pixels:int = -1; // in binary: every bit set to 1
for (var i:int = 0; i < 16; i++)
    trace("pixel " + (i + 1) + " value : " + (pixels >> i * 2 & 3));
outputs, as expected:
"pixel i value : 3" for each i from 1 to 16
I'm not too good with C++; however, my code compiled, but the function crashes my program. Below is a short sum-up of the code; it's not complete, but the function and the call are there.
void rot13(char *ret, const char *in);
int main()
{
    char* str;
    MessageBox(NULL, _T("Test 1; Does get here!"), _T("Test 1"), MB_OK);
    rot13(str, "uryyb jbeyq!"); // hello world!
    /* Do stuff with char* str; */
    MessageBox(NULL, _T("Test 2; Doesn't get here!"), _T("Test 2"), MB_OK);
    return 0;
}

void rot13(char *ret, const char *in){
    for( int i=0; i = sizeof(in); i++ ){
        if(in[i] >= 'a' && in[i] <= 'm'){
            // Crashes Here;
            ret[i] += 13;
        }
        else if(in[i] > 'n' && in[i] <= 'z'){
            // Possibly crashing Here too?
            ret[i] -= 13;
        }
        else if(in[i] > 'A' && in[i] <= 'M'){
            // Possibly crashing Here too?
            ret[i] += 13;
        }
        else if(in[i] > 'N' && in[i] <= 'Z'){
            // Possibly crashing Here too?
            ret[i] -= 13;
        }
    }
}
The function gets to "Test 1; Does get here!" - however, it doesn't get to "Test 2; Doesn't get here!".
Thank you in advance.
-Nick Daniels.
str is uninitialised and it is being dereferenced in rot13, causing the crash. Allocate memory for str before passing to rot13() (either on the stack or dynamically):
char str[1024] = ""; /* Large enough to hold string and initialised. */
The for loop inside rot13() is also incorrect (infinite loop):
for( int i=0; i = sizeof(in); i++ ){
change to:
for(size_t i = 0, len = strlen(in); i < len; i++ ){
You've got several problems:
You never allocate memory for your output - you never initialise the variable str. This is what's causing your crash.
Your loop condition always evaluates to true (= assigns and returns the assigned value, == tests for equality).
Your loop condition uses sizeof(in) with the intention of getting the size of the input string, but that will actually give you the size of the pointer. Use strlen instead.
Your algorithm increases or decreases the values already in the return string by 13, but ret is uninitialised, so the output is +/- 13 from whatever garbage the output buffer happens to contain; it should be computed from the input string instead.
Your algorithm doesn't handle 'A', 'n' or 'N'.
Your algorithm doesn't handle any non-alphabetic characters, yet the test string you use contains two.
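Putting those fixes together, a corrected rot13 might look something like this (a sketch, assuming the caller provides a large enough, writable buffer as described above):
#include <cstring>

void rot13(char *ret, const char *in) {
    size_t len = std::strlen(in);
    for (size_t i = 0; i < len; i++) {
        char c = in[i];
        if ((c >= 'a' && c <= 'm') || (c >= 'A' && c <= 'M'))
            ret[i] = c + 13;
        else if ((c >= 'n' && c <= 'z') || (c >= 'N' && c <= 'Z'))
            ret[i] = c - 13;
        else
            ret[i] = c; // leave spaces, punctuation, etc. unchanged
    }
    ret[len] = '\0'; // don't forget the terminator
}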
What would be the best way to compare a pattern with a set of strings, one by one, while rating the amount with which the pattern matches each string? In my limited experience with regex, matching strings with patterns using regex seems to be a pretty binary operation...no matter how complicated the pattern is, in the end, it either matches or it doesn't. I am looking for greater capabilities, beyond just matching. Is there a good technique or algorithm that relates to this?
Here's an example:
Let's say I have a pattern foo bar and I want to find the string that most closely matches it out of the following strings:
foo for
foo bax
foo buo
fxx bar
Now, none of these actually match the pattern, but which non-match is the closest to being a match? In this case, foo bax would be the best choice, since it matches 6 out of the 7 characters.
Apologies if this is a duplicate question, I didn't really know what exactly to search for when I looked to see if this question already exists.
This one works; I checked it against the Wikipedia example: the distance between "kitten" and "sitting" is 3.
import java.util.ArrayList;
import java.util.List;

public class LevenshteinDistance {
    public static final String TEST_STRING = "foo bar";

    public static void main(String... args) {
        LevenshteinDistance test = new LevenshteinDistance();
        List<String> testList = new ArrayList<String>();
        testList.add("foo for");
        testList.add("foo bax");
        testList.add("foo buo");
        testList.add("fxx bar");
        for (String string : testList) {
            System.out.println("Levenshtein Distance for " + string + " is " + test.getLevenshteinDistance(TEST_STRING, string));
        }
    }

    public int getLevenshteinDistance(String s, String t) {
        if (s == null || t == null) {
            throw new IllegalArgumentException("Strings must not be null");
        }

        int n = s.length(); // length of s
        int m = t.length(); // length of t

        if (n == 0) {
            return m;
        } else if (m == 0) {
            return n;
        }

        int p[] = new int[n+1]; // 'previous' cost array, horizontally
        int d[] = new int[n+1]; // cost array, horizontally
        int _d[];               // placeholder to assist in swapping p and d

        // indexes into strings s and t
        int i; // iterates through s
        int j; // iterates through t

        char t_j; // jth character of t
        int cost; // cost

        for (i = 0; i <= n; i++) {
            p[i] = i;
        }

        for (j = 1; j <= m; j++) {
            t_j = t.charAt(j-1);
            d[0] = j;

            for (i = 1; i <= n; i++) {
                cost = s.charAt(i-1) == t_j ? 0 : 1;
                // minimum of cell to the left+1, to the top+1, diagonally left and up +cost
                d[i] = Math.min(Math.min(d[i-1]+1, p[i]+1), p[i-1]+cost);
            }

            // copy current distance counts to 'previous row' distance counts
            _d = p;
            p = d;
            d = _d;
        }

        // our last action in the above loop was to switch d and p, so p now
        // actually has the most recent cost counts
        return p[n];
    }
}
That's an interesting question! The first thing that came to mind is that the way regular expressions are matched is by building a DFA. If you had direct access to the DFA that was built for a given regex (or just built it yourself!), you could run the input through it and measure the distance from the last state you transitioned to to an accept state, using the shortest path as a measure of how close the input was to being accepted. I'm not aware of any libraries that would let you do that easily, though, and even this measure probably wouldn't exactly map onto your intuition in a number of cases.
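To make the idea concrete, here is a toy sketch (the DFA is hand-built for the made-up regex "ab*c"; the states, transitions and names are all illustrative, not from any library). It runs the input as far as it can, then uses a BFS to measure how many more characters would be needed to reach an accept state:
import java.util.*;

public class DfaDistance {
    // transitions.get(state) maps an input character to the next state; a missing entry is a dead end.
    static List<Map<Character, Integer>> transitions = List.of(
            Map.of('a', 1),         // state 0: expects 'a'
            Map.of('b', 1, 'c', 2), // state 1: loops on 'b', accepts on 'c'
            Map.of()                // state 2: accept state
    );
    static Set<Integer> accepting = Set.of(2);

    static int distanceToAccept(String input) {
        int state = 0;
        for (char c : input.toCharArray()) {
            Integer next = transitions.get(state).get(c);
            if (next == null) break; // stuck: measure from the last state we reached
            state = next;
        }
        // BFS over DFA states to the nearest accept state (each edge = one more character needed).
        Deque<Integer> queue = new ArrayDeque<>(List.of(state));
        Map<Integer, Integer> dist = new HashMap<>(Map.of(state, 0));
        while (!queue.isEmpty()) {
            int s = queue.poll();
            if (accepting.contains(s)) return dist.get(s);
            for (int t : transitions.get(s).values()) {
                if (!dist.containsKey(t)) {
                    dist.put(t, dist.get(s) + 1);
                    queue.add(t);
                }
            }
        }
        return Integer.MAX_VALUE; // no accept state reachable
    }

    public static void main(String[] args) {
        System.out.println(distanceToAccept("abbc")); // 0: fully matched
        System.out.println(distanceToAccept("abb"));  // 1: one character short of a match
        System.out.println(distanceToAccept("x"));    // 2: stuck in the start state
    }
}
As noted above, this only measures how much input is still missing, not how wrong the consumed input was, so it captures just part of the intuition.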
Not so long ago I was in an interview that required solving two very interesting problems. I'm curious how you would approach the solutions.
Problem 1 :
Product of everything except current
Write a function that takes as input two integer arrays of length len, input and index, and generates a third array, result, such that:
result[i] = product of everything in input except input[index[i]]
For instance, if the function is called with len=4, input={2,3,4,5}, and index={1,3,2,0}, then result will be set to {40,24,30,60}.
IMPORTANT: Your algorithm must run in linear time.
Problem 2: (the topic was in one of Jeff's posts)
Shuffle card deck evenly
Design (either in C++ or in C#) a class Deck to represent an ordered deck of cards, where a deck contains 52 cards, divided in 13 ranks (A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K) of the four suits: spades (♠), hearts (♥), diamonds (♦) and clubs (♣).
Based on this class, devise and implement an efficient algorithm to shuffle a deck of cards. The cards must be evenly shuffled, that is, every card in the original deck must have the same probability to end up in any possible position in the shuffled deck.
The algorithm should be implemented in a method shuffle() of the class Deck:
void shuffle()
What is the complexity of your algorithm (as a function of the number n of cards in the deck)?
Explain how you would test that the cards are evenly shuffled by your method (black box testing).
P.S. I had two hours to code the solutions
First question:
int countZeroes(int[] vec) {
    int ret = 0;
    foreach (int i in vec) if (i == 0) ret++;
    return ret;
}

int[] mysticCalc(int[] values, int[] indexes) {
    int zeroes = countZeroes(values);
    int[] retval = new int[values.Length];
    int product = 1;
    if (zeroes >= 2) { // 2 or more zeroes, all results will be 0
        for (int i = 0; i < values.Length; i++) {
            retval[i] = 0;
        }
        return retval;
    }
    foreach (int i in values) {
        if (i != 0) product *= i; // we have at most 1 zero, don't include it in the product
    }
    int indexcounter = 0;
    foreach (int idx in indexes) {
        if (zeroes == 1 && values[idx] != 0) { // One zero at another index. Our value will be 0
            retval[indexcounter] = 0;
        }
        else if (zeroes == 1) { // One zero at this index. The result is the product of the rest
            retval[indexcounter] = product;
        }
        else { // No zeros. Return product / value at index
            retval[indexcounter] = product / values[idx];
        }
        indexcounter++;
    }
    return retval;
}
Worst case, this program makes three linear passes over the arrays.
For the first one, first calculate the product of the entire contents of input, and then, for every element of index, divide the calculated product by input[index[i]] to fill in your result array.
Of course I have to assume that the input has no zeros.
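A quick sketch of that approach in C# (ProductExceptNoZeros is an illustrative name; it assumes, as stated, that input contains no zeros):
// One pass to build the product, one pass to fill the result: O(len).
static int[] ProductExceptNoZeros(int[] input, int[] index) {
    int product = 1;
    foreach (int v in input) product *= v;

    int[] result = new int[index.Length];
    for (int i = 0; i < index.Length; i++)
        result[i] = product / input[index[i]];
    return result;
}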
Tnilsson, great solution (because I've done it the exact same way :P).
I don't see any other way to do it in linear time. Does anybody? Because the recruiting manager told me that this solution was not strong enough.
Are we missing some super complex, do-everything-in-one-return-line solution?
A linear-time solution in C# 3 for the first problem:
IEnumerable<int> ProductExcept(List<int> l, List<int> indexes) {
    if (l.Count(i => i == 0) == 1) {
        int singleZeroProd = l.Aggregate(1, (x, y) => y != 0 ? x * y : x);
        return from i in indexes select l[i] == 0 ? singleZeroProd : 0;
    } else {
        int prod = l.Aggregate(1, (x, y) => x * y);
        return from i in indexes select prod == 0 ? 0 : prod / l[i];
    }
}
Edit: Took into account a single zero!! My last solution took me 2 minutes while I was at work so I don't feel so bad :-)
Product of everything except current in C
void product_except_current(int input[], int index[], int out[], int len) {
    int prod = 1, nzeros = 0, izero = -1;
    for (int i = 0; i < len; ++i)
        if ((out[i] = input[index[i]]) != 0)
            // compute product of non-zero elements
            prod *= out[i]; // ignore possible overflow problem
        else {
            if (++nzeros == 2)
                // if number of zeros greater than 1 then out[i] = 0 for all i
                break;
            izero = i; // save index of zero-valued element
        }

    for (int i = 0; i < len; ++i)
        out[i] = nzeros ? 0 : prod / out[i];
    if (nzeros == 1)
        out[izero] = prod; // the only non-zero-valued element
}
Here's the answer to the second one in C# with a test method. Shuffle looks O(n) to me.
Edit: Having looked at the Fisher-Yates shuffle, I discovered that I'd re-invented that algorithm without knowing about it :-) it is obvious, however. I implemented the Durstenfeld approach which takes us from O(n^2) -> O(n), really clever!
public enum CardValue { A, Two, Three, Four, Five, Six, Seven, Eight, Nine, Ten, J, Q, K }
public enum Suit { Spades, Hearts, Diamonds, Clubs }

public class Card {
    public Card(CardValue value, Suit suit) {
        Value = value;
        Suit = suit;
    }
    public CardValue Value { get; private set; }
    public Suit Suit { get; private set; }
}

public class Deck : IEnumerable<Card> {
    public Deck() {
        initialiseDeck();
        Shuffle();
    }

    private Card[] cards = new Card[52];

    private void initialiseDeck() {
        for (int i = 0; i < 4; ++i) {
            for (int j = 0; j < 13; ++j) {
                cards[i * 13 + j] = new Card((CardValue)j, (Suit)i);
            }
        }
    }

    public void Shuffle() {
        Random random = new Random();
        for (int i = 0; i < 52; ++i) {
            // Pick j in [0, 51 - i] inclusive so the card at 51 - i can also stay in place;
            // Random.Next(n) returns a value in [0, n - 1].
            int j = random.Next(52 - i);
            // Swap the cards.
            Card temp = cards[51 - i];
            cards[51 - i] = cards[j];
            cards[j] = temp;
        }
    }

    public IEnumerator<Card> GetEnumerator() {
        foreach (Card c in cards) yield return c;
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() {
        foreach (Card c in cards) yield return c;
    }
}

class Program {
    static void Main(string[] args) {
        foreach (Card c in new Deck()) {
            Console.WriteLine("{0} of {1}", c.Value, c.Suit);
        }
        Console.ReadKey(true);
    }
}
In Haskell:
import Data.Array

-- left!i  is the product of the elements before index i (prefix products) and
-- right!i is the product of the elements from index i onwards (suffix products),
-- so left!i * right!(i+1) is the product of everything except the element at i.
problem1 input index = [(left!i) * (right!(i+1)) | i <- index]
  where left  = scanWith scanl
        right = scanWith scanr
        scanWith scan = listArray (0, length input) (scan (*) 1 input)
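This avoids division entirely by combining prefix products (scanl) and suffix products (scanr). For the example from the question, a GHCi session would look something like:
ghci> problem1 [2,3,4,5] [1,3,2,0]
[40,24,30,60]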
Vaibhav, unfortunately we have to assume that there could be a 0 in the input table.
Second problem.
public static void shuffle(int[] array)
{
    Random rng = new Random(); // i.e., java.util.Random.
    int n = array.length;      // The number of items left to shuffle (loop invariant).
    while (n > 1)
    {
        int k = rng.nextInt(n); // 0 <= k < n.
        n--;                    // n is now the last pertinent index;
        int temp = array[n];    // swap array[n] with array[k] (does nothing if k == n).
        array[n] = array[k];
        array[k] = temp;
    }
}
This is a copy/paste from the wikipedia article about the Fisher-Yates shuffle. O(n) complexity
Tnilsson, I agree that YXJuLnphcnQ's solution is arguably faster, but the idea is the same. I forgot to add that the language is optional in the first problem, as well as in the second.
You're right that calculating the zeroes and the product in the same loop is better. Maybe that was the thing.
Tnilsson, I've also used the Fisher-Yates shuffle :). I'm very interested, though, about the testing part :)
Trilsson made a separate topic about the testing part of the question
How to test randomness (case in point - Shuffling)
very good idea Trilsson:)
YXJuLnphcnQ, that's the way I did it too. It's the most obvious.
But the fact is that if you write an algorithm that just shifts all the cards in the collection one position to the right every time you call shuffle(), it would pass that test, even though the output is not random.
Shuffle card deck evenly in C++
#include <algorithm>

class Deck {
    // each card is 8-bit: 4-bit for suit, 4-bit for value
    // suits and values are extracted using bit-magic
    char cards[52];
public:
    // ...
    void shuffle() {
        std::random_shuffle(cards, cards + 52);
    }
    // ...
};
Complexity: Linear in N. Exactly 51 swaps are performed. See http://www.sgi.com/tech/stl/random_shuffle.html
Testing:
// ...
int main() {
    typedef std::map<std::pair<size_t, Deck::value_type>, size_t> Map;
    Map freqs;
    Deck d;
    const size_t ntests = 100000;

    // compute frequencies of events: card at position
    for (size_t i = 0; i < ntests; ++i) {
        d.shuffle();
        size_t pos = 0;
        for (Deck::const_iterator j = d.begin(); j != d.end(); ++j, ++pos)
            ++freqs[std::make_pair(pos, *j)];
    }

    // if Deck.shuffle() is correct then all frequencies must be similar
    for (Map::const_iterator j = freqs.begin(); j != freqs.end(); ++j)
        std::cout << "pos=" << j->first.first << " card=" << j->first.second
                  << " freq=" << j->second << std::endl;
}
As usual, one test is not sufficient.