How can functions be named to clearly reflect that they follow a declarative paradigm?
Context: I've recently started working on creating libraries that work in a declarative manner but I'm having a hard time coming up with a naming convention that reflects it. In the past I've created imperative functions with names like createThing, but I'm finding it difficult to succinctly convey the idea of "do what is necessary to return a Thing that looks like ____" in a function name.
Ideally, I'd like to follow a formal standard or established naming convention. Otherwise, I'm hoping to at least find some guidance from pre-existing codebases.
Given your concern about having a succinct function name, I would first look into whether your createThing function does too much, and split it into a few smaller chunks (the example below is heavily influenced by C# syntax):
var yourThing = new Thing()
    .with(new PropertyA()).thenWith(new DependentPropertyOfA()) // digress a bit
    .with(new PropertyB()) // back to the main thread here
    .withPieceOfLogic((parameter1, parameter2) => { /* define some logic here */ }) // so you can potentially swap implementations as well
    .create();
Here I'm aiming at something along the lines of FluentInterface. This might get you to the aesthetics you're looking for.
One thing to bear in mind with this approach: the chaining makes it heavily linear, and it might not work well if you often need to make detours while defining your main object.
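To make the mechanics concrete, here is a minimal C# sketch of the builder behind such an API. ThingBuilder, With, WithPieceOfLogic, and Create are hypothetical names invented for this illustration, not an established convention:
using System;
using System.Collections.Generic;

// Hypothetical fluent builder; all names here are illustrative.
public class Thing
{
    public Thing(IReadOnlyList<object> parts, Func<int, int, int> logic)
    {
        /* assemble the Thing from the declared parts */
    }
}

public class ThingBuilder
{
    private readonly List<object> _parts = new List<object>();
    private Func<int, int, int> _logic;

    // Each With* call merely records intent and returns the builder, enabling chaining.
    public ThingBuilder With(object part)
    {
        _parts.Add(part);
        return this;
    }

    public ThingBuilder WithPieceOfLogic(Func<int, int, int> logic)
    {
        _logic = logic;
        return this;
    }

    // Only Create interprets the accumulated declarations and builds the result.
    public Thing Create() => new Thing(_parts, _logic);
}
The naming signal here is that every method except Create is a pure recording step, so the chain reads as a description of what the Thing should look like rather than a sequence of mutations.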
A few more examples to draw inspiration from:
https://www.entityframeworktutorial.net/efcore/fluent-api-in-entity-framework-core.aspx
https://momentjs.com/docs/
https://jasmine.github.io/tutorials/your_first_suite
In my experience there is no canonical naming scheme for primary functions in declarative frameworks, but there are many prominent examples you could draw inspiration from.
Methods associated with the FluentInterface style are often prefixed with 'with', e.g.:
new HTTPClient()
    .withProxy(new Proxy('localhost', 8080))
    .withTimeOut(Duration.of("30s"))
    .withRequestHeaders(
        new Headers()
            .with('User-Agent', 'FluidClient')
    );
See https://martinfowler.com/bliki/FluentInterface.html
Some FluentInterface designs do away with function name prefixes and just name functions directly after the declarative element they represent. An example from jOOQ (https://www.jooq.org/doc/3.12/manual/getting-started/use-cases/jooq-as-a-standalone-sql-builder/):
String sql = create.select(field("BOOK.TITLE"), field("AUTHOR.FIRST_NAME"),
                           field("AUTHOR.LAST_NAME"))
                   .from(table("BOOK"))
                   .join(table("AUTHOR"))
                   .on(field("BOOK.AUTHOR_ID").eq(field("AUTHOR.ID")))
                   .where(field("BOOK.PUBLISHED_IN").eq(1948))
                   .getSQL();
This has the benefit of making a chain of imperative invocations read like a declarative DSL. However, eschewing naming conventions for methods can make the source of the builder class itself less readable.
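As a rough C# illustration of that trade-off (the QueryBuilder type and its methods are invented for this example, not jOOQ's actual API), naming builder methods directly after grammar elements makes the call site read like the DSL, while the builder itself remains plain imperative code:
using System.Text;

// Hypothetical SQL-builder fragment; not a real library's API.
public class QueryBuilder
{
    private readonly StringBuilder _sql = new StringBuilder();

    public QueryBuilder Select(params string[] fields)
    {
        _sql.Append("SELECT ").Append(string.Join(", ", fields));
        return this;
    }

    public QueryBuilder From(string table)
    {
        _sql.Append(" FROM ").Append(table);
        return this;
    }

    public QueryBuilder Where(string predicate)
    {
        _sql.Append(" WHERE ").Append(predicate);
        return this;
    }

    public string GetSql() => _sql.ToString();
}

// Usage reads like the SQL it declares:
// var sql = new QueryBuilder().Select("TITLE").From("BOOK").Where("PUBLISHED_IN = 1948").GetSql();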
The above examples are of builders being used to construct objects, where encapsulated state is used to represent the concept being declared. Some OO frameworks take this further, so that the code is composed solely of constructors for 'fine-grained' objects. In most C-derived languages, constructors are required to be named after the type they are associated with.
An example of a UI widget tree being declared in Flutter (from https://flutter.dev/docs/development/ui/widgets-intro#using-material-components):
return Scaffold(
  appBar: AppBar(
    leading: IconButton(
      icon: Icon(Icons.menu),
      tooltip: 'Navigation menu',
      onPressed: null,
    ),
    title: Text('Example title'),
    actions: <Widget>[
      IconButton(
        icon: Icon(Icons.search),
        tooltip: 'Search',
        onPressed: null,
      ),
    ],
  ),
  // body is the majority of the screen.
  body: Center(
    child: Text('Hello, world!'),
  ),
  floatingActionButton: FloatingActionButton(
    tooltip: 'Add', // used by assistive technologies
    child: Icon(Icons.add),
    onPressed: null,
  ),
);
I'm writing a jade-like language that will transpile to HTML. Here's what a tag definition looks like:
section #mainWrapper .container
this transpiles to:
<section id="mainWrapper" class="container">
Should the lexer tell class and id apart, or should it only spit out the special characters together with the names?
In other words, should the token array look like this:
[
{type: 'tag', value: 'section'},
{type: 'id', value: 'mainWrapper'},
{type: 'class', value: 'container'}
]
and then the parser just assembles these into a tree
or should the lexer be very primitive, only returning matched strings, and leave it to the parser to distinguish them:
[
{type: 'name', value: 'section'},
{type: 'name', value: '#mainWrapper'},
{type: 'name', value: '.container'}
]
As a rule of thumb, tokenisers shouldn't parse and parsers shouldn't tokenise.
In this concrete case, it seems to me unlikely that every unadorned use of a name-like token -- such as section -- would necessarily be a tag. It's more likely that section is a tag because of its syntactic context. If the tokeniser attempts to mark it as a tag, then the tokeniser is tracking syntactic context, which means that it is parsing.
The sigils . and # are less clear-cut. You could consider them single-character tokens (which the syntax will insist be followed by a name) or you might consider them to be the first character of a special type of string. Some things that might sway you one way or the other:
Can the sigil be separated from the following name by whitespace? (# mainWrapper). If so, the sigil is probably a token.
Is the lexical form of a class or id different from a name? Think about the use of special characters, for example. If you can't accurately recognise the object without knowing what sigil (if any) preceded it, then it might better be considered as a single token.
Are there other ways to represent class names? For example, how do you represent multiple classes? Some possibilities off the top of my head:
.classA .classB
.(classA classB)
."classA classB"
class = "classA classB"
If any of the options other than the first one are valid, you should probably just make . a token. But correct handling of the quoted strings might create other challenges. In particular, it could require retokenising the contents of the string literal, which would be a violation of the heuristic that parsers shouldn't tokenise. Fortunately, these aren't absolute rules; retokenisation is sometimes necessary. But keep it to a minimum.
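As a concrete sketch of the "sigil as a token" option (written in C# for brevity; the token type names are illustrative), the lexer below emits sigils and names separately and leaves the tag/id/class classification to the parser:
using System;
using System.Collections.Generic;

// Illustrative lexer: emits sigil and name tokens; classifying them is the parser's job.
public record Token(string Type, string Value);

public static class Lexer
{
    public static List<Token> Tokenise(string input)
    {
        var tokens = new List<Token>();
        int i = 0;
        while (i < input.Length)
        {
            char c = input[i];
            if (char.IsWhiteSpace(c)) { i++; continue; }
            if (c == '#' || c == '.')
            {
                tokens.Add(new Token("sigil", c.ToString()));
                i++;
                continue;
            }
            int start = i;
            while (i < input.Length && (char.IsLetterOrDigit(input[i]) || input[i] == '-' || input[i] == '_'))
                i++;
            if (i == start)
                throw new FormatException($"Unexpected character '{c}' at position {i}");
            tokens.Add(new Token("name", input.Substring(start, i - start)));
        }
        return tokens;
    }
}

// Lexer.Tokenise("section #mainWrapper .container") yields:
// [name:section] [sigil:#] [name:mainWrapper] [sigil:.] [name:container]
The parser then decides from context that a bare name in tag position is a tag, a name after '#' is an id, and a name after '.' is a class.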
The separation into lexical and syntactic analysis should not be a strait-jacket. It's a code organization technique intended to make the individual parts easier to write, understand, debug and document. It is often (but not always) the case that the separation makes it easier for users of your language to understand the syntax, which is also important. But it is not appropriate for every parsing task, and the precise boundary is flexible (but not porous: you can put the boundary where it is most convenient, but once it's placed, don't try to shove things through the cracks).
If you find this separation of concerns too difficult for your project, you should either reconsider your language design or try scannerless parsing.
In Chisel iotesters, we pass a factory that creates a Chisel design to the tester, e.g. () => new DUT, as follows:
"Test" should "simulate" in {
chisel3.iotesters.Driver.execute(arguments, () => new DUT) { c => new MyPeekPokeTester(c) } should be (true)
}
If I have many tests and a large design, design elaboration happens for every test, resulting in a long runtime. Since for many tests it is possibly the exact same design being passed, a logical question comes up: is there a way to reuse the elaborated design (DUT.fir or DUT.v, depending on the backend) across multiple tests? Given that a reset is called at the beginning of every test, it shouldn't cause functional issues.
I'd suggest building a PeekPokeTester that aggregates a number of testers. Something like
class MyMegaPeekPokeTester(c: MyDut) extends PeekPokeTester(c) {
new MyPeekPokeTester1(c) &&
new MyPeekPokeTester2(c) &&
...
new MyPeekPokeTesterN(c)
}
There are various ways you could sugar this up (putting the testers into a list, calling reset between them programmatically, etc.).
There's an effort underway to refactor and modernize the testers, and this issue is under consideration. One complication is that PeekPokeTesters require access to the instance of the DUT in order to provide type-safe access to the IO. It is difficult to serialize or otherwise preserve this information.
It is my first time using Yii, and unlike my old programming style, I notice that it handles relationships automatically in its model.
public function relations()
{
    return array(
        'author'=>array(self::BELONGS_TO, 'User', 'author_id'),
        'categories'=>array(self::MANY_MANY, 'Category',
            'tbl_post_category(post_id, category_id)'),
    );
}
I'm not used to doing MySQL relationships this way; my old programming habit is to connect to and manipulate the data in the PHP program itself. To clarify my question: are these Yii model relationships important? If I don't use this method, will I encounter problems?
Yii relations are very useful; if you work with them, you will see that they let you write less code and make your code more readable.
While they are used heavily in Yii applications, you won't get into any trouble if you don't use relations; they are meant to help you code and develop faster.
For example, in the Yii blog demo there is a relation between the Post model and the Comment model, so you can go like this:
$post = Post::model()->findByPk( $id ); // find one post
$allCommentsRelated = $post->comments; // just one line for the whole search query and model instantiation
By the way, relations support two types of loading:
lazy loading (the default mechanism)
eager loading
You have to know your scenario and choose the one that suits it best.
I am rewriting some code to make functional changes, and I am stuck in a situation where I either need to overload a function to accommodate two or three types of parameters (performing almost identical operations on them) or use one function with a lot of parameters. For now I am going with the latter option, and I just want to know the specific disadvantages (if any) of using a function with a lot of parameters (and when I say a lot, I mean 15).
I am looking for a general answer, nothing language specific, so I am not mentioning the language here, but just for information, I am using C#.
The problem with a lot of parameters is that at the place where you call the code it can be difficult to see what the parameters mean:
// Uhh... what?
run(x, y, max_x, true, false, dx * 2, range_start, range_end, 0.01, true);
Some languages solve this by allowing named parameters and optional parameters with sensible defaults.
Another approach is to put your parameters in a parameter object with named members and then pass that single object as the argument to your function. This is a refactoring approach called Introduce Parameter Object.
You may also find it useful to put one group of related parameters that belong together into one class, and another group of parameters into a different class.
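To make that concrete in C# (RunSettings and its members are invented for this example), the opaque call above could become self-describing via a parameter object, optionally combined with named arguments:
// Hypothetical parameter object grouping the arguments of run().
public class RunSettings
{
    public double X { get; set; }
    public double Y { get; set; }
    public double MaxX { get; set; }
    public bool Clip { get; set; }
    public bool Wrap { get; set; }
    public double Step { get; set; }
    public double RangeStart { get; set; }
    public double RangeEnd { get; set; }
    public double Tolerance { get; set; } = 0.01; // sensible default
    public bool Verbose { get; set; }
}

// The call site now documents itself:
// run(new RunSettings { X = x, Y = y, MaxX = maxX, Step = dx * 2, Verbose = true });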
Try to think as the person who will use the method.
Ideally, the purpose of each argument should be immediately comprehensible.
If not all arguments are used in all cases, you can:
use optional parameters (C# 4, for example, supports them)
use a struct or class to hold the parameters and fill in only the required properties
refactor your code; I don't know what it does, but 15 parameters strikes me as a huge number
If you're trying to write your code "the functional way", you might find currying useful: create meaningful functor objects that are initialized with just a couple of parameters. If a function takes a lot of parameters, its list can (or should) usually be divided into meaningful chunks, and currying should form a chain of functions with meaningful intent.
So instead of (reusing the example from the answer above):
run(x, y, max_x, true, false, dx * 2, range_start, range_end, 0.01, true);
you might use
// initialize functors
run_in_userbox = run(x, y, max_x);
run_with_bounds = run_in_userbox(true, false);
iterate_within_bounds = run_with_bounds(dx * 2, range_start, range_end, 0.01);
result = iterate_within_bounds(true); // computation only starts here
I don't know if C# supports this, but that's how the problem is usually solved in functional languages.
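For what it's worth, C# can express this style with lambdas that return further functions. A minimal sketch mirroring the stages above (the parameter groups and values are invented for illustration):
using System;

// Currying in C#: each call fixes one group of parameters and returns the next stage.
Func<double, double, Func<bool, bool, Func<double, double, bool>>> run =
    (x, y) => (clip, wrap) => (rangeStart, rangeEnd) =>
    {
        // The computation only runs once every parameter group has been supplied.
        return clip && rangeStart <= x && x <= rangeEnd;
    };

var runInUserBox = run(1.0, 2.0);              // fix the position
var runWithBounds = runInUserBox(true, false); // fix the flags
bool result = runWithBounds(0.0, 10.0);        // computation starts here
Console.WriteLine(result);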
The way I normally handle this is to have very small separate methods for each signature needed, but have them call private methods to do the actual work, which, as you said, is pretty much identical between the use cases.
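A minimal sketch of that shape (the names are hypothetical):
// Thin public overloads; the near-identical work lives in one private method.
public class ReportGenerator
{
    public string Generate(int customerId) =>
        GenerateCore(customerId.ToString(), includeDetails: false);

    public string Generate(string customerName, bool includeDetails) =>
        GenerateCore(customerName, includeDetails);

    // The shared logic exists in exactly one place.
    private string GenerateCore(string key, bool includeDetails) =>
        includeDetails ? $"Detailed report for {key}" : $"Report for {key}";
}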
While building my DAL repository, I stumbled upon a concept called Pipes and Filters. I read about it here and here, and saw a screencast from here. I am still not sure how to go about implementing this pattern. Theoretically it all sounds good, but how do we really implement it in an enterprise scenario?
I would appreciate any resources, tips, examples, or explanations of this pattern in the context of the data mappers/ORM mentioned in the question.
Ultimately, LINQ on IEnumerable<T> is a pipes-and-filters implementation. IEnumerable<T> is a streaming API, meaning that data is lazily returned as you ask for it (via iterator blocks), rather than everything being loaded at once and returned as a big buffer of records.
This means that your query:
var qry = from row in source // IEnumerable<T>
          where row.Foo == "abc"
          select new { row.ID, row.Name };
is:
var qry = source.Where(row => row.Foo == "abc")
                .Select(row => new { row.ID, row.Name });
As you enumerate over this, it will consume the data lazily. You can see this graphically with Jon Skeet's Visual LINQ. The only things that break the pipe are things that force buffering: OrderBy, GroupBy, etc. For high-volume work, Jon and I worked on Push LINQ for doing aggregates without buffering in such scenarios.
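To see the streaming behaviour concretely, here is a small self-contained sketch using an iterator block (the Source method and its console output are purely illustrative):
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    // An iterator block: rows are produced one at a time, on demand.
    static IEnumerable<int> Source()
    {
        for (int i = 0; i < 5; i++)
        {
            Console.WriteLine($"producing {i}");
            yield return i;
        }
    }

    static void Main()
    {
        var qry = Source().Where(n => n % 2 == 0).Select(n => n * 10);
        // Nothing has been produced yet; the pipe only runs as we enumerate.
        foreach (var n in qry)
            Console.WriteLine($"consumed {n}");
        // The output interleaves "producing" and "consumed" lines, showing each
        // row flowing through the whole pipe before the next one is read.
    }
}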
IQueryable<T> (exposed by most ORM tools - LINQ-to-SQL, Entity Framework, LINQ-to-NHibernate) is a slightly different beast; because the database engine is going to do most of the heavy lifting, the chances are that most of the steps are already done - all that is left is to consume an IDataReader and project this to objects/values - but that is still typically a pipe (IQueryable<T> implements IEnumerable<T>) unless you call .ToArray(), .ToList() etc.
With regard to use in the enterprise: my view is that it is fine to use IQueryable<T> to write composable queries inside the repository, but they shouldn't leave the repository, as that would make the internal operation of the repository subject to the caller; you would be unable to properly unit test / profile / optimize / etc. I've taken to doing clever things in the repository, but returning lists/arrays. This also means my repository stays unaware of the implementation.
This is a shame - as the temptation to "return" IQueryable<T> from a repository method is quite large; for example, this would allow the caller to add paging/filters/etc - but remember that they haven't actually consumed the data yet. This makes resource management a pain. Also, in MVC etc you'd need to ensure that the controller calls .ToList() or similar, so that it isn't the view that is controlling data access (otherwise, again, you can't unit test the controller properly).
A safe (IMO) use of filters in the DAL would be things like:
public Customer[] List(string name, string countryCode) {
    using(var ctx = new CustomerDataContext()) {
        IQueryable<Customer> qry = ctx.Customers.Where(x => x.IsOpen);
        if(!string.IsNullOrEmpty(name)) {
            qry = qry.Where(cust => cust.Name.Contains(name));
        }
        if(!string.IsNullOrEmpty(countryCode)) {
            qry = qry.Where(cust => cust.CountryCode == countryCode);
        }
        return qry.ToArray();
    }
}
Here we've added filters on-the-fly, but nothing happens until we call ToArray. At this point, the data is obtained and returned (disposing the data-context in the process). This can be fully unit tested. If we did something similar but just returned IQueryable<T>, the caller might do something like:
var custs = customerRepository.GetCustomers()
                              .Where(x => SomeUnmappedFunction(x));
And all of a sudden our DAL starts failing (cannot translate SomeUnmappedFunction to TSQL, etc). You can still do a lot of interesting things in the repository, though.
The only pain point here is that it might push you to have a few overloads to support different calling patterns (with/without paging, etc.). Until optional/named parameters arrive, I find the best answer here is to use extension methods on the interface; that way, I only need one concrete repository implementation:
class CustomerRepository : ICustomerRepository {
    public Customer[] List(
        string name, string countryCode,
        int? pageSize, int? pageNumber) {...}
}
interface ICustomerRepository {
    Customer[] List(
        string name, string countryCode,
        int? pageSize, int? pageNumber);
}
static class CustomerRepositoryExtensions {
    public static Customer[] List(
        this ICustomerRepository repo,
        string name, string countryCode) {
        return repo.List(name, countryCode, null, null);
    }
}
Now we have virtual overloads (as extension methods) on ICustomerRepository - so our caller can use repo.List("abc","def") without having to specify the paging.
Finally: without LINQ, using pipes and filters becomes a lot more painful. You'll be writing some kind of text-based query (TSQL, ESQL, HQL). You can obviously append strings, but it isn't very "pipe/filter"-ish. The "Criteria API" is a bit better, but not as elegant as LINQ.