
The Future of Lisp

Posted: Thu Sep 18, 2008 9:45 am
by findinglisp
So I was just doing some morning reading with my first cup of coffee and stumbled across this on Reddit:
http://www.heise-online.co.uk/open/Shut ... ews/111534

In the article, Mark Shuttleworth, Ubuntu Big Cheese, was asking the Python community what they are doing about three big megatrends in computing today: cloud computing, transactional memory, and multicore processors.

I think this is a general question that every programming language, including Lisp, should be asking itself (if a programming language, as such, is capable of asking itself anything ;) ). Anyway, what do you LispForum members think about that question with respect to Lisp?

I'll start by saying that I think Shuttleworth's question, as reported by Heise, is slightly incorrect. I would not say that transactional memory is really a large trend right now, and it may never be. Rather, transactional memory is one technique for dealing with a massively multicore, parallel world. There are other ways to deal with that same fundamental problem, "How do I program a massively multicore system without making a mess of it?" Admittedly, TM is pretty elegant. Clojure takes the STM route to handle parallelism.
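
To make the STM idea a bit more concrete, here is a rough sketch of the read/validate/commit/retry shape of the technique in Common Lisp. To be clear, this is not Clojure's implementation and Common Lisp has no standard STM; the names (TREF, CALL-ATOMICALLY) are made up, and bordeaux-threads is assumed only for the commit lock:

;; (ql:quickload :bordeaux-threads)   ; assumed available, for the commit lock

(defstruct (tref (:constructor make-tref (value)))
  value
  (version 0))

(defvar *commit-lock* (bt:make-lock "stm-commit"))

(defun call-atomically (tx)
  "Run TX, a function of a reader and a writer closure, as a transaction.
It is retried until everything it read is still unchanged at commit time."
  (loop
    (let ((read-log '())     ; entries of (tref . version-seen)
          (write-log '()))   ; entries of (tref . new-value)
      (flet ((tx-read (ref)
               (pushnew (cons ref (tref-version ref)) read-log :key #'car)
               (let ((pending (assoc ref write-log)))
                 (if pending (cdr pending) (tref-value ref))))
             (tx-write (ref new-value)
               (let ((pending (assoc ref write-log)))
                 (if pending
                     (setf (cdr pending) new-value)
                     (push (cons ref new-value) write-log)))))
        (let ((result (funcall tx #'tx-read #'tx-write)))
          (bt:with-lock-held (*commit-lock*)
            (when (every (lambda (entry)
                           (= (cdr entry) (tref-version (car entry))))
                         read-log)
              ;; Nothing we read was modified behind our back: commit.
              (dolist (entry write-log)
                (setf (tref-value (car entry)) (cdr entry))
                (incf (tref-version (car entry))))
              (return result))))))))    ; a stale read falls through and retries

;; Example: move 10 units from *A* to *B*; concurrent transfers cannot
;; observe or produce an inconsistent intermediate state.
(defvar *a* (make-tref 100))
(defvar *b* (make-tref 0))

(call-atomically
 (lambda (rd wr)
   (funcall wr *a* (- (funcall rd *a*) 10))
   (funcall wr *b* (+ (funcall rd *b*) 10))))

The point is that the transaction body itself never takes a lock; conflicts are detected at commit time and the whole transaction is simply retried.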

So, let me revise the question with what I feel are the big trends:
  • Instead of "cloud computing," which seems pretty trendy and buzzword-compliant, let's use "Internet-wide computing." The thought here is that things are going to be increasingly distributed. Programming languages need to provide abstractions to help programmers deal with that. This includes not just simple distribution of processing power "into the cloud" but also the use of resources across the Internet (think what are typically called "mashups" now, but on steroids).
  • Massively multicore - I think Shuttleworth got that right. I like to use the word "massive" associated with this because I think it helps people internalize that the future here is not 8 or even 16 cores, but rather hundreds and eventually thousands of cores on a single die. Intel has already shown 80-core test chips, albeit not with full x86 cores. But I have had personal discussions with Intel architects who say that tens of cores are certainly right around the corner.
  • Finally, to replace transactional memory, let me add high availability. In the future, you're going to see lots of application requirements that say things are going to have to work all day, every day, forever. The idea of application maintenance is going to dwindle for some applications to near zero. The ability to develop, debug, and upgrade a running application is going to be more important than ever.
Fortunately, I think that Lisp brings a LOT to the party with respect to many of these issues, though I think there are some fundamental things that Lisp needs to address ASAP in order to let its other fundamental goodness shine clearly. I discuss some of that here in this thread: viewtopic.php?f=2&t=137
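
As a tiny, concrete illustration of the live-upgrade point: this is something Lisp images already do routinely. The function and the policy here are invented, but the mechanism is just ordinary redefinition:

;; A server image has been up for months with this (made-up) policy:
(defun rate-limit (requests-per-second)
  (min requests-per-second 100))

;; Later, from a REPL attached to the live image, we ship a fix without
;; restarting anything; the very next call uses the new definition:
(defun rate-limit (requests-per-second)
  (min requests-per-second 500))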

Thoughts?

Re: The Future of Lisp

Posted: Thu Sep 18, 2008 11:47 am
by metageek
findinglisp wrote:Intel has already shown 80-core test chips, albeit not with full x86 cores. But I have had personal discussions with Intel architects who say that tens of cores are certainly right around the corner.
You don't need personal discussions for that: Larrabee is tentatively expected to have 24 x86-64 cores late next year.

http://en.wikipedia.org/wiki/Larrabee_(GPU)

Re: The Future of Lisp

Posted: Thu Sep 18, 2008 4:37 pm
by schoppenhauer
It's not that I really know much about what I'm about to say, but I think that with newer multicore architectures with thousands of cores, as well as with cloud computing, functional programming with few side effects will become mainstream, because if you define a function f = g(h, i, j) recursively, you can compute h, i and j on different cores or computers and so distribute the work of computing it more effectively. Without side effects, you can focus on computing the value of an expression without having to do much memory management between cores or computers. And it is a lot easier to write a functional expression than to tell the system to fork in C source code.
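Just to illustrate what I mean with f = g(h, i, j), here is a rough Common Lisp sketch. It assumes the bordeaux-threads library is loaded and that its join-thread returns the thread function's value; pcall-3 and the example thunks are made up:

;; (ql:quickload :bordeaux-threads)   ; assumed available

(defun pcall-3 (g h i j)
  "Call G on the results of the thunks H, I and J, each run in its own thread."
  (let ((th (bt:make-thread h))
        (ti (bt:make-thread i))
        (tj (bt:make-thread j)))
    (funcall g
             (bt:join-thread th)
             (bt:join-thread ti)
             (bt:join-thread tj))))

;; f = g(h, i, j): the three argument computations run concurrently,
;; which is safe exactly because none of them has side effects.
(pcall-3 #'+
         (lambda () (loop for k below 1000000 sum k))
         (lambda () (expt 2 20))
         (lambda () (length (make-list 100000))))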
With "internet-wide-computing", I imagine that some common standard for exchanging structured data, presumably xml-based, will evolve, and thinking of many different systems and architectures, even program code is likely to be represented by these structures rather than by some bytecode - which will make optimizing harder because it will be too low-level - or other programming language - which will require additional parsers.
As I am not a scientist, I may be completely wrong. These are just a few thoughts I had when thinking about it; they are not very profound or detailed.

Looking at Common Lisp, I see a programming language in which it is comparatively easy to find out which side effects a function depends on (using a code walker, etc.), and in which you can comparatively easily transform any code into a form that resolves those dependencies so that concurrent threads or processes can use it (I have to admit I have never tried to do so, but it should be possible, at least to some degree). It should be possible to transform code automatically into a "distributed" form. Also, all Lisp code (s-expressions) maps directly onto tree structures and should therefore be easy to exchange (i.e. without a lot of parsers and transformations in between).
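For the exchange part, the standard reader and printer already do most of the work. A minimal sketch (apart from disabling #., all hardening is left out):

(defun sexp->string (form)
  "Print FORM as portable text, the way you would send it over a socket."
  (with-standard-io-syntax
    (prin1-to-string form)))

(defun string->sexp (string)
  "Read a form back from text received from elsewhere."
  (with-standard-io-syntax
    (let ((*read-eval* nil))   ; never evaluate #. coming off the wire
      (read-from-string string))))

;; Code is just data here: it survives the round trip as a tree.
(string->sexp (sexp->string '(defun square (x) (* x x))))
;; => (DEFUN SQUARE (X) (* X X))  (plus the end position as a second value)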
Libraries like cl-perec show how versatile CLOS is: if you can save CLOS objects in a database, you can also put them into some data structure and send them over the Internet to another thread waiting for them.
I think Common Lisp would be a good language for "cloud computing". But I may be wrong (I would be interested to hear if I am).

In fact, my impression is sometimes that much of the modern development of existing programming systems just reinvents, in a more complicated way, things that Common Lisp (and other not-so-widespread systems like Standard ML) already have, instead of being used to create something really new.

Re: The Future of Lisp

Posted: Thu Sep 18, 2008 4:50 pm
by chucks
findinglisp wrote:high availability. In the future, you're going to see lots of application requirements that say things are going to have to work all day, every day, forever.
Recall Erann Gat's description of debugging and upgrading the Remote Agent software running aboard the "Deep Space 1" spacecraft 100 million miles from Earth in his "Lisping at JPL".

Re: The Future of Lisp

Posted: Thu Sep 18, 2008 5:33 pm
by findinglisp
schoppenhauer wrote:because if you define a function f = g(h, i, j) recursively, you can compute h, i and j on different cores or computers and so distribute the work of computing it more effectively.
As a question, has this ever been done well automatically with today's compiler technology? People always cite functional programming as being beneficial for auto-parallelizing, and in theory they're quite right, but does any current compiler actually do this well today? GHC with Haskell, maybe? :?:

If we can't already do this today, my hunch is that it's pretty hard to get right. If that's true, then the simpler answer may be explicit process-level parallelism, à la Erlang.
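
To show what I mean by the Erlang style, here is a rough Common Lisp sketch of share-nothing workers that talk only through mailboxes. The mailbox API is invented and just sits on bordeaux-threads locks and condition variables:

;; (ql:quickload :bordeaux-threads)   ; assumed available

(defstruct mailbox
  (queue '())
  (lock  (bt:make-lock))
  (ready (bt:make-condition-variable)))

(defun send (mailbox message)
  (bt:with-lock-held ((mailbox-lock mailbox))
    (setf (mailbox-queue mailbox)
          (append (mailbox-queue mailbox) (list message)))
    (bt:condition-notify (mailbox-ready mailbox))))

(defun receive (mailbox)
  (bt:with-lock-held ((mailbox-lock mailbox))
    (loop while (null (mailbox-queue mailbox))
          do (bt:condition-wait (mailbox-ready mailbox)
                                (mailbox-lock mailbox)))
    (pop (mailbox-queue mailbox))))

;; A "process" that owns no shared state: it just squares what it is sent.
(defun spawn-squarer (in out)
  (bt:make-thread
   (lambda ()
     (loop for msg = (receive in)
           until (eq msg :stop)
           do (send out (* msg msg))))))

(let ((in (make-mailbox)) (out (make-mailbox)))
  (spawn-squarer in out)
  (send in 7)
  (prog1 (receive out)   ; => 49
    (send in :stop)))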

Re: The Future of Lisp

Posted: Thu Sep 18, 2008 8:13 pm
by yena
findinglisp wrote: As a question, has this ever been done well automatically with today's compiler technology? People always cite functional programming as being beneficial for auto-parallelizing, and in theory they're quite right, but does any current compiler actually do this well today? GHC with Haskell, maybe? :?:
Yes, this is the very question I usually ask Haskell fans, and I've never gotten a clear answer. I think this is indeed a hard problem, since it is hard to decide when a calculation is time-consuming enough to offset the overhead of distributed programming. I would love to see a reference to a working implementation of this concept in any functional language, with a critical discussion of the pros and cons.

Re: The Future of Lisp

Posted: Fri Sep 19, 2008 9:01 am
by findinglisp
yena wrote:
findinglisp wrote: As a question, has this ever been done well automatically with today's compiler technology? People always cite functional programming as being beneficial for auto-parallelizing, and in theory they're quite right, but does any current compiler actually do this well today? GHC with Haskell, maybe? :?:
Yes, this is the very question I usually ask Haskell fans, and I've never gotten a clear answer. I think this is indeed a hard problem, since it is hard to decide when a calculation is time-consuming enough to offset the overhead of distributed programming. I would love to see a reference to a working implementation of this concept in any functional language, with a critical discussion of the pros and cons.
Exactly. Isn't deciding the runtime of a given subproblem essentially the same as the halting problem?

The only thing that I can think of would be to go with a Linda-based common database of subproblems from which workers on individual cores take problems and to which they return results. Still, the overhead of that would be difficult to offset without some human direction, I would think.
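
Something like this toy version, say. The names are invented, a real Linda tuple space also matches tuples against patterns (skipped here), and bordeaux-threads is assumed:

;; (ql:quickload :bordeaux-threads)   ; assumed available

(defstruct task-space
  (tasks   '())
  (results '())
  (lock    (bt:make-lock)))

(defun take-task (space)
  "Remove and return one pending subproblem, or NIL when none are left."
  (bt:with-lock-held ((task-space-lock space))
    (pop (task-space-tasks space))))

(defun post-result (space result)
  (bt:with-lock-held ((task-space-lock space))
    (push result (task-space-results space))))

(defun run-workers (space n-workers worker-fn)
  "Start N-WORKERS threads that drain SPACE through WORKER-FN, then
return everything they posted back."
  (let ((threads (loop repeat n-workers
                       collect (bt:make-thread
                                (lambda ()
                                  (loop for task = (take-task space)
                                        while task
                                        do (post-result space
                                                        (funcall worker-fn task))))))))
    (mapc #'bt:join-thread threads)
    (task-space-results space)))

;; Square 1..20 with four workers pulling from the shared space.
(run-workers (make-task-space :tasks (loop for k from 1 to 20 collect k))
             4
             (lambda (n) (* n n)))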

Re: The Future of Lisp

Posted: Fri Sep 19, 2008 10:32 am
by dmitry_vk
findinglisp wrote: Exactly. Isn't deciding the runtime of a given subproblem essentially the same as the halting problem?

The only thing that I can think of would be to go with a Linda-based common database of subproblems from which workers on individual cores take problems and to which they return results. Still, the overhead of that would be difficult to offset without some human direction, I would think.
I don't think that the problem's equivalence to the halting problem should stop us from implementing auto-parallelism.

There is an already-implemented technique, profile-driven optimization, in which information about the cost of different parts of the code is collected at runtime and used to optimize the code (AFAIK, some Java VMs do this). Something like this could be applied to parallelizing subtasks.
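
A rough sketch of what that could look like, with invented names and thresholds (bordeaux-threads assumed): time each kind of subtask as the program runs, and only pay the cost of spawning a thread once a task kind has proven expensive enough.

;; (ql:quickload :bordeaux-threads)   ; assumed available

(defvar *observed-times* (make-hash-table :test #'eq)
  "Last observed run time, in seconds, for each task label.")

(defparameter *spawn-threshold* 0.01
  "Only fork tasks that historically take longer than this, in seconds.")

(defun timed-call (label thunk)
  "Run THUNK, record how long it took under LABEL, and return its value."
  (let ((start (get-internal-real-time)))
    (prog1 (funcall thunk)
      (setf (gethash label *observed-times*)
            (/ (- (get-internal-real-time) start)
               internal-time-units-per-second)))))

(defun maybe-in-parallel (label thunk)
  "Return a thunk yielding THUNK's value, forked into its own thread only
if LABEL has already proven expensive; cheap or unknown tasks run inline."
  (if (> (gethash label *observed-times* 0) *spawn-threshold*)
      (let ((thread (bt:make-thread (lambda () (timed-call label thunk)))))
        (lambda () (bt:join-thread thread)))
      (let ((value (timed-call label thunk)))
        (lambda () value))))

;; The first call runs inline and records its cost; later calls get forked.
(funcall (maybe-in-parallel :big-sum
                            (lambda () (loop for k below 5000000 sum k))))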

Re: The Future of Lisp

Posted: Fri Sep 19, 2008 11:05 am
by chucks
Dan Weinreb has posted an interesting comment related to Lisp's future: The Failure of Lisp?

Re: The Future of Lisp

Posted: Sun Sep 21, 2008 6:25 am
by Exolon
findinglisp wrote:As a question, has this ever been done well automatically with today's compiler technology? People always cite functional programming as being beneficial for auto-parallelizing, and in theory they're quite right, but does any current compiler actually do this well today? GHC with Haskell, maybe? :?:
I'd like to know as well. Even if the runtime of each task were known, optimal multi-core scheduling is NP-complete, IIRC. Since we can't make assumptions about the runtime of each task before running it, and we don't have time to run genetic-algorithm or simulated-annealing-style searches, the best we can do is simple heuristic algorithms. They might be good enough, but I don't know about the current real-life implementations.
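For example, the classic "longest processing time first" greedy heuristic: sort the tasks by decreasing estimated cost and always hand the next one to the least-loaded core. A small Common Lisp sketch, assuming we have cost estimates at all:

(defun lpt-schedule (estimated-times n-cores)
  "Assign (task . seconds) pairs to N-CORES cores, longest first, each
going to whichever core currently has the least total load."
  (let ((assignments (make-array n-cores :initial-element '()))
        (loads       (make-array n-cores :initial-element 0)))
    (dolist (task (sort (copy-list estimated-times) #'> :key #'cdr) assignments)
      (let ((core (position (reduce #'min loads) loads)))
        (push task (aref assignments core))
        (incf (aref loads core) (cdr task))))))

;; Four tasks onto two cores:
(lpt-schedule '((:a . 8) (:b . 5) (:c . 4) (:d . 3)) 2)
;; => #(((:D . 3) (:A . 8)) ((:C . 4) (:B . 5)))   ; core loads 11 and 9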
What I (and probably all of you) have heard about GHC is that Haskell's system of specifying side effects (and having a hierarchy of them) via monads does allow it to reschedule things cleverly on a single core.

A quick Google search turned up this, though:
http://www.haskell.org/ghc/docs/latest/html/users_guide/lang-parallel.html#id431042 wrote:Ordinary single-threaded Haskell programs will not benefit from enabling SMP parallelism alone: you must expose parallelism to the compiler.
This indicates that there's no automatic multi-core scheduling happening yet, although the syntax and semantics for manually exploiting parallelism seem quite expressive and powerful.